EU AI Office | |
---|---|
Agency overview | |
Formed | 21 February 2024 |
Jurisdiction | European Union |
Headquarters | Brussels, Belgium |
Employees | 60 (2024); 140+ (projected) |
Agency executive | Lucilla Sioli, Head of the AI Office [10] |
Parent department | Directorate-General for Communications Networks, Content and Technology (DG CONNECT) [2] |
Parent agency | European Commission |
Website | digital-strategy |
The European Artificial Intelligence Office (also known as the EU AI Office or AI Office) is a European Union office established within the European Commission that supports the implementation of the Artificial Intelligence Act and, at EU level, supervises and enforces the obligations for providers of general-purpose AI (GPAI) models. [3] [4] The Office also fosters the development and use of trustworthy AI, supports research and innovation in trustworthy AI, and contributes to international cooperation on AI governance. [3]
A provisional agreement on the Act was reached in December 2023, and the Commission established the AI Office before the Regulation's formal adoption. This was a deliberate move by the Commission to allow preparation for implementing the forthcoming AI Act to begin as early as possible once it was formally adopted. [5]
The Commission formally established the AI Office by a Decision of 24 January 2024, which was published in the Official Journal on 14 February 2024 and entered into force on 21 February 2024. [6] The Commission subsequently approved the Office's organisational structure and senior appointments within DG CONNECT, with effect from 16 June 2024. [1] [2] The Artificial Intelligence Act itself entered into force on 1 August 2024. [7]
The Office's legal basis is the Commission Decision establishing the European AI Office. [2] Under the AI Act, the European Commission has exclusive powers to supervise and enforce the obligations for providers of GPAI models (Chapter V), and it entrusts these tasks to the AI Office; national authorities enforce most other parts of the Regulation. [8] [4] The AI Act provides for fines for non-compliance, including up to 3% of worldwide annual turnover or €15 million (whichever is higher) for certain infringements by GPAI providers, and up to €35 million or 7% of worldwide annual turnover for specified prohibited practices. [9] [7]
Lucilla Sioli, Director for Artificial Intelligence and Digital Industry within the European Commission, leads the AI Office. Sioli holds a PhD in economics from the University of Southampton (UK) and a degree in economics from the Catholic University of Milan (Italy), and has been a civil servant with the European Commission since 1997. [10]
The AI Office sits within DG CONNECT and is organised into five units—Excellence in AI & Robotics; Regulation & Compliance; AI Safety; AI Innovation & Policy Coordination; and AI for Societal Good—supported by a Lead Scientific Advisor and an Advisor for International Affairs. The Office plans to employ over 140 staff, including technical specialists, lawyers and policy experts. [3] Independent reporting during the set‑up noted the five‑department structure, identified Sioli as head, and discussed resourcing and recruitment needs. [1] [11]
The Office coordinates EU‑level governance and practical tools for the AI Act’s rollout, particularly for GPAI models. In July 2025, the Commission received and published the General‑Purpose AI Code of Practice, a voluntary instrument developed by independent experts with input from more than 1,000 stakeholders, covering transparency, copyright, and safety/security. [12] [13] The Commission also issued guidelines clarifying who must comply with GPAI obligations and how key concepts apply, [14] [15] and provided a template for GPAI providers to publish summaries of training data to support transparency and copyright compliance. [16] On 1 August 2025 the Commission confirmed that the GPAI obligations would start to apply from 2 August 2025, with enforcement by the Commission (via the AI Office) from 2 August 2026; models placed on the market before 2 August 2025 must comply by 2 August 2027. [17] [15]
The Office supports and works with new AI Act bodies: the European Artificial Intelligence Board (comprising Member State representatives), a Scientific Panel of independent experts, and an Advisory Forum of stakeholders. It also cooperates with the European Centre for Algorithmic Transparency (ECAT). [3] [4] [18]
The AI Office thus acts as a new EU-level regulator situated within the European Commission's Directorate-General for Communications Networks, Content and Technology (DG CONNECT). Its role is to monitor, supervise, and enforce the AI Act requirements on GPAI models and systems across the 27 EU Member States. [19]
This includes analysing emerging and unforeseen systemic risks arising from GPAI development and deployment, developing capability evaluations, conducting model evaluations, and investigating incidents of potential infringement and non-compliance. [20]
The Commission has stated that it will need to draft around 70 implementing and delegated acts. [21] [22]
The implementation timeline includes the entry into force of the AI Act on 1 August 2024, the application of obligations for GPAI providers from 2 August 2025, the start of Commission enforcement of those obligations (via the AI Office) from 2 August 2026, and a compliance deadline of 2 August 2027 for GPAI models placed on the market before 2 August 2025.
News and analysis outlets have treated the Office as a central EU‑level regulator for GPAI and as a coordination hub for the AI Act, while also reporting questions about staffing and resources during launch and early implementation. [23] [11] [24] [25]