The General-Purpose AI Code of Practice (GPAI CoP) [1] is a voluntary tool released by the European Commission on 10 July 2025 to support compliance with the European Union Artificial Intelligence Act (AI Act). It provides operational guidance for providers of general-purpose AI models, particularly in relation to Articles 53 [2] and 55 [3] of the AI Act, which entered into application on 2 August 2025. [4]
The Code is organised into three chapters (Transparency, Copyright, and Safety and Security) and outlines how providers can meet the Act's relevant obligations. [4]
Although the Code is non-binding, providers can rely on adherence to it: EU regulators will presume that providers following the Code meet the corresponding legal requirements of the AI Act. As such, signatories to the Code will benefit from reduced administrative burdens and increased legal certainty compared to providers that demonstrate compliance in other ways. While adherence to the Code is voluntary, compliance with the AI Act is not. [4]
The EU AI Act, adopted in 2024, established a risk-based regulatory regime for artificial intelligence in the European Union. [5] The rationale for the GPAI CoP stems from Article 56 [6] of the AI Act, which empowers the EU AI Office to develop a voluntary rulebook to guide how AI model providers can meet their legal obligations – specifically those found in Articles 53 and 55. [7]
Under Articles 53 and 55, developers of general-purpose AI models whose training compute exceeds 10²³ floating-point operations (FLOPs) and that are placed on the EU market must meet transparency obligations and put in place a policy to comply with EU copyright law. [4]
Models trained with more than 10²⁵ FLOPs are classified as presenting systemic risk and are subject to enhanced safety requirements. [4] The Commission may also designate a model as presenting systemic risk if it has an equivalent impact or capabilities under the Annex XIII criteria, even if its training compute falls below that threshold. [7]
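To make the arithmetic concrete, the following is a minimal illustrative sketch of how the two compute thresholds interact with a Commission designation. It is not part of the Act or the Code: the function name, constants, and return strings are this example's own assumptions.

```python
# Illustrative sketch only: classify a model's obligation tier from its
# training compute, using the thresholds cited above. Names are hypothetical.

GPAI_THRESHOLD_FLOP = 1e23           # Article 53 scope (transparency, copyright policy)
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption of systemic risk (Article 55)

def obligation_tier(training_flop: float, designated_by_commission: bool = False) -> str:
    """Return the obligation tier suggested by training compute alone.

    `designated_by_commission` stands in for a designation under the
    Annex XIII criteria, which can apply below the compute threshold.
    """
    if training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP or designated_by_commission:
        return "GPAI model with systemic risk (Articles 53 and 55)"
    if training_flop > GPAI_THRESHOLD_FLOP:
        return "GPAI model (Article 53)"
    return "below the GPAI compute threshold"

print(obligation_tier(3e24))                                 # Article 53 only
print(obligation_tier(3e24, designated_by_commission=True))  # systemic risk
```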
Because the AI Act is relatively vague on how model providers should implement these requirements, the Code is meant to help by detailing processes and practices for compliance. [4]
The GPAI CoP was drawn up by 13 independent experts [8] working in four thematic working groups: Transparency & Copyright, Risk assessment for systemic risk, Technical risk mitigation for systemic risk, and Governance risk mitigation for systemic risk. Each group was coordinated by the European Union Artificial Intelligence Office (EU AI Office) and drew on contributions from nearly 1,000 stakeholders, including AI developers, academics, civil society organisations, national authorities, and international observers. [7]
The Code went through three earlier draft iterations, in November 2024, December 2024, and March 2025, before the final version was published on 10 July 2025, [7] more than two months later than initially planned. [9] The GPAI CoP is expected to be updated continuously by the EU AI Office, alongside other tools such as the training data summary template. [7]
Among U.S.-based technology companies, Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI have signed the GPAI CoP. [10] xAI, founded by Elon Musk, has signed only one of the three chapters, namely the Safety and Security chapter. [9] Prominent European AI companies that have signed include Aleph Alpha and Mistral AI. [10]
The European Commission maintains an updated list of signatories. [1] As of January 2026, Meta is the most notable company that has declined to sign the Code. [10] Major Chinese AI companies, such as Alibaba, Baidu, and DeepSeek, have also not signed. [9]
Providers that do not sign the GPAI CoP must still meet the binding requirements of the EU AI Act. The European Commission has indicated that it may take tougher action against companies that did not sign the Code. [10]
The first two chapters of the GPAI CoP address transparency and copyright compliance and apply to all GPAI providers. [8] They offer providers a way to demonstrate compliance with their obligations under Article 53 of the AI Act. [1]
The Transparency chapter addresses the documentation of a model's capabilities, limitations, and points of contact, and expects providers to make key documentation available to downstream providers. [7] Signatories must also publish summaries of the content used to train their models. [8]
In the Copyright chapter, Signatories commit to following a policy that aligns with EU copyright law. [7] For example, they commit to mitigating the risk of copyright-infringing output. [8]
The Safety and Security chapter is the most extensive chapter of the Code and applies only to GPAI models with systemic risk, [7] meaning it is relevant only to the small number of providers of the most advanced models. [1] It specifies how Signatories commit to meeting their obligations under Article 55(1) of the AI Act. [4]
The chapter outlines a comprehensive risk management process that must be applied before major deployment decisions, such as releasing a new systemic-risk GPAI model on the EU market or substantially updating an existing one. [4]
Signatories commit to identifying systemic risks of their model, analysing and evaluating them, determining whether risk levels are acceptable, and implementing mitigation measures if necessary. This process should be repeated until models achieve an acceptable level of risk across all identified risks. [4]
Signatories commit to analysing and evaluating at least four “specified” categories of systemic risk. [4]
They are also expected to identify other systemic risks to public health, safety, and fundamental rights. The Code instructs providers to consider model capabilities, propensities, and affordances in this identification. [4]
Signatories commit to developing risk scenarios illustrating how identified risks could materialise in real-world conditions. [4]
After identifying potential systemic risks, Signatories commit to analysing and evaluating the risks in order to determine whether they are acceptable or not, drawing on scientific literature, training data analysis, incident databases, expert consultation, and other sources. [4]
They also commit to conducting state-of-the-art model evaluations such as benchmarking, red teaming, and human uplift studies, targeting each risk. [4]
The risk analysis process is interconnected: insights from risk modelling should inform model evaluation design, while post-market monitoring should feed back into ongoing analysis. Signatories commit to ultimately estimating the likelihood and severity of each systemic risk. [4]
Appendix 3.5 of the Safety and Security chapter requires signatories to ensure that independent external evaluators conduct model evaluations. Signatories may claim an exemption from this requirement only if they can demonstrate that their model is “similarly safe” to another model that has already been shown to comply with the Code, [11] or if they are unable to appoint an appropriately qualified evaluator. [12]
The determination of “similarly safe” is based on comparable performance on benchmarks and the similarity of other model characteristics, such as their architecture. The CoP acknowledges that this kind of information is typically available only for models by the same provider, or potentially for open-weights or open-source models. [11]
The Code requires providers to compare estimated risks against predefined acceptance criteria, which must be measurable, based on model capabilities, and set in advance. While providers themselves determine the level of risk they deem acceptable, the predefined criteria and acceptance thresholds prevent them from loosening their risk tolerance ahead of deployment decisions. [4]
Only if all risks are below acceptable levels should a model be deployed. [4]
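The iterative process described above can be summarised in a short sketch. This is purely illustrative: the data structure, the severity-times-likelihood score, and the function names are assumptions made for this example, not requirements of the Code.

```python
# Illustrative sketch of the iterative risk-management loop described above;
# the types, scoring, and names are hypothetical, not prescribed by the Code.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class RiskEstimate:
    name: str          # e.g. one of the specified systemic-risk categories
    severity: float    # estimated severity on a provider-defined scale
    likelihood: float  # estimated likelihood on a provider-defined scale

def is_acceptable(estimate: RiskEstimate, threshold: float) -> bool:
    # The Code requires acceptance criteria to be measurable and set in advance;
    # a simple severity x likelihood score stands in for such a criterion here.
    return estimate.severity * estimate.likelihood <= threshold

def risk_management_cycle(
    risks: Iterable[str],
    threshold: float,
    evaluate: Callable[[str], RiskEstimate],   # model evaluations, risk analysis
    mitigate: Callable[[RiskEstimate], None],  # safety and security mitigations
    max_rounds: int = 10,
) -> bool:
    """Repeat evaluation and mitigation until every identified risk is acceptable."""
    for _ in range(max_rounds):
        estimates = [evaluate(risk) for risk in risks]
        unacceptable = [e for e in estimates if not is_acceptable(e, threshold)]
        if not unacceptable:
            return True   # all risks below acceptable levels: deployment may proceed
        for estimate in unacceptable:
            mitigate(estimate)
    return False          # risks remain unacceptable: do not deploy
```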
The Code mandates ongoing risk management throughout the model lifecycle, including light-touch evaluations, continuous mitigation, post-market monitoring, and incident tracking and reporting. [4]
It further requires organisational governance structures assigning responsibility for risk management and expects providers to promote a “healthy risk culture,” including informing employees about the whistleblower protection policy, allowing internal challenges of decisions concerning systemic risk management, and committing to not retaliating against employees who disclose concerns about systemic risks to oversight authorities. [4]
Signatories commit to creating two types of documentation: a Safety and Security Framework and Safety and Security Model Reports. [4]
Signatories commit to notifying the EU AI Office of their Safety and Security Framework and Safety and Security Model Reports. [13]
Public disclosure is limited: providers are required to publish summaries of their Safety and Security Framework and Model Reports only when a model may pose greater risk than comparable models already available in the EU, and only to the extent necessary to assess or mitigate systemic risks. This significantly reduces the scope for public scrutiny and evaluation of providers’ safety and security practices. [4]
Under Article 53 and the transparency requirements set out in the Code, downstream providers are entitled to receive a “downstream package” from the GPAI provider. This package includes a completed Model Documentation Form and key information required under Annex XII of the AI Act, such as a description of the model, intended tasks, performance, architecture, licensing terms, and integration specifications. [7]
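For illustration only, the downstream package could be modelled as a simple data structure carrying the Annex XII items listed above; the class and field names here are hypothetical and do not reproduce the official Model Documentation Form.

```python
# Hypothetical structure loosely mirroring the Annex XII items listed above;
# class and field names are illustrative, not the official documentation form.
from dataclasses import dataclass
from typing import List

@dataclass
class DownstreamPackage:
    model_description: str
    intended_tasks: List[str]
    performance_summary: str
    architecture: str
    licensing_terms: str
    integration_specifications: str

package = DownstreamPackage(
    model_description="General-purpose language model",
    intended_tasks=["text generation", "summarisation"],
    performance_summary="See the accompanying evaluation report",
    architecture="Decoder-only transformer",
    licensing_terms="Proprietary licence",
    integration_specifications="REST API; see integration guide",
)
```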
The Transparency chapter also requires GPAI providers to supply additional information requested by downstream providers within 14 calendar days, provided that the requested information is necessary for understanding the model's capabilities and limitations and for enabling downstream providers to comply with their obligations under the AI Act. [7]