The EU Artificial Intelligence Act represents a groundbreaking regulatory framework designed to govern the development, deployment, and use of AI systems across the European Union. It aims to promote trustworthy AI by addressing risks to health, safety, and fundamental rights while encouraging innovation and competitiveness. The Act categorizes AI systems based on risk levels: unacceptable risk (prohibited), high-risk (strict requirements), limited risk (transparency obligations), and minimal risk (voluntary codes). It applies to providers, deployers, importers, and distributors of AI, with extraterritorial reach for systems affecting EU users. Penalties for non-compliance can reach up to €35 million or 7% of global annual turnover, whichever is higher, a structure similar to GDPR fines. Much of the attention so far has centered on AI governance and on how organizations can stay compliant as AI is woven into nearly everything they do.
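To make that penalty ceiling concrete, here is a minimal Python sketch of the "whichever is higher" calculation for the most serious infringements. The function name and the turnover figure are our own illustrative choices, not anything defined by the Act, and this is of course not legal advice:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for the most serious EU AI Act violations
    (prohibited practices): EUR 35 million or 7% of worldwide annual
    turnover, whichever is higher. Illustrative sketch only."""
    FIXED_CAP = 35_000_000           # EUR 35 million
    TURNOVER_RATE = 0.07             # 7% of global annual turnover
    return max(FIXED_CAP, TURNOVER_RATE * global_annual_turnover_eur)

# Hypothetical example: a company with EUR 2 billion in global turnover
print(f"Maximum fine: EUR {max_fine_eur(2_000_000_000):,.0f}")
# -> Maximum fine: EUR 140,000,000 (7% exceeds the EUR 35M floor)
```

For smaller companies the €35 million floor dominates; for large enterprises the 7% turnover cap quickly becomes the binding number.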
The legislative journey began with the European Commission’s proposal in 2021, amid growing concerns over AI’s ethical and societal impacts. After extensive negotiations, a political agreement was reached in late 2023, leading to formal adoption in 2024. The Act was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024, marking the start of a staggered implementation to allow stakeholders time to adapt. As of September 2025, early provisions like prohibitions on unacceptable-risk AI and AI literacy requirements have already taken effect (since February 2025), while governance rules and general-purpose AI (GPAI) obligations became applicable in August 2025. Full compliance for most high-risk systems is required by August 2026, with extended transitions for certain embedded systems until 2027 or later.
This phased rollout includes the establishment of national authorities, the European AI Office, regulatory sandboxes, and codes of practice to guide implementation. The Commission will issue guidelines, standards, and reports to refine the framework over time. Challenges include harmonizing with existing laws, clarifying classifications (e.g., via guidelines by February 2026), and ensuring global alignment, as non-EU providers must appoint representatives for GPAI models. Ongoing evaluations every few years will assess effectiveness and potentially amend prohibitions, high-risk lists, and governance structures.
EU AI Act Timeline of Key Milestones and Upcoming Deadlines
The following table outlines the major dates in the EU AI Act’s history and implementation, from proposal to future reviews. Dates reflect the phased approach, with some obligations applying retroactively or with grace periods.
| Date | Event / Deadline |
|---|---|
| 21 April 2021 | European Commission publishes the original draft proposal, initiating the legislative process. |
| 9 December 2023 | Political agreement reached between the European Parliament and Council on the final text. |
| 13 March 2024 | European Parliament formally adopts the AI Act. |
| 21 May 2024 | Council of the EU gives final approval, completing the adoption phase. |
| 13 June 2024 | The AI Act is formally signed. |
| 12 July 2024 | Publication in the Official Journal of the European Union. |
| 1 August 2024 | Entry into force of the AI Act; no requirements apply immediately, but the implementation countdown begins. |
| 2 November 2024 | Member States must identify and list authorities for fundamental rights protection, notifying the Commission. |
| 2 February 2025 | Prohibitions on unacceptable-risk AI systems and AI literacy requirements take effect (Chapters I and II). |
| 2 May 2025 | Codes of practice for GPAI models must be finalized. |
| 2 August 2025 | Governance rules, penalties, and obligations for GPAI models apply; non-EU providers must appoint an EU representative and comply with transparency rules. Member States designate national competent authorities and report on resources (and every two years thereafter). Annual Commission review of the prohibitions begins. |
| 2 February 2026 | Commission provides guidelines on AI system classification (Article 6) and post-market monitoring. |
| 2 August 2026 | Majority of the AI Act becomes fully applicable, including core obligations for high-risk AI systems (e.g., risk management, documentation, data governance, human oversight). Applies to systems placed on the market before this date if significantly changed. National AI regulatory sandboxes must be operational. |
| 2 August 2027 | Rules for high-risk AI systems embedded in regulated products (Annex I) fully apply. Pre-existing GPAI models must comply by this date. |
| 2 August 2028 | Commission evaluates the AI Office's functioning and voluntary codes of conduct (and every three years thereafter), and submits reports on amendments to high-risk lists, transparency measures, and governance (and every four years thereafter). Progress report on standardization for energy-efficient GPAI development due (and every four years thereafter). |
| 1 December 2028 | Commission reports on the delegation of power (Article 97). |
| 31 December 2030 | Certain large-scale IT systems (Annex X) placed on the market before 2 August 2027 must be brought into compliance. |
EU AI Act Compliance Checker
As of September 2025, Captain Compliance is the only software solution we are aware of that provides compliance checks for the EU AI Act. Neither EU officials nor regulators such as the ICO have released an official "compliance checker" tool, but several guidelines, codes of practice, and resources are available to help providers and deployers assess compliance manually, should you prefer not to retain Captain Compliance's help. The European Commission has issued the General-Purpose AI (GPAI) Code of Practice, a voluntary tool offering practical guidance on the safety, transparency, and copyright obligations for GPAI models; it became available in July 2025, ahead of the GPAI rules entering into application on 2 August 2025.
Key Resources for Compliance
- Guidelines for Providers of GPAI Models: Issued by the Commission in July 2025, these clarify obligations under the AI Act.
- Guidelines on Transparent AI Systems: Provide instructions for complying with transparency obligations, published in September 2025.
- European AI Scanner: A RegTech tool designed for EU AI Act compliance, offering a comprehensive solution to align with the Act and internal policies.
- AI Auditing Tools: From the European Data Protection Board, aimed at evaluating GDPR compliance in AI systems.
Organizations are encouraged to consult the official EU websites for updates, as the European AI Office oversees implementation and may release additional tools in the future.
EU AI Act Article 4
Article 4 of the EU AI Act focuses on AI literacy, requiring providers and deployers to ensure sufficient knowledge and skills among their staff and relevant persons. The full text states: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
Key Requirements of EU AI Act Article 4
- Definition of AI Literacy: Encompasses skills, knowledge, and understanding to make informed decisions about AI deployment, interpret outputs, and identify risks.
- Obligations: Organizations must implement training programs tailored to roles, considering technical backgrounds and usage contexts. This became effective on 2 February 2025.
- Best Practices: Include developing workforce skills for ethical AI use, with flexibility based on organization size and AI complexity.
- Related Recitals: Recital 56 emphasizes informed decision-making; others (45-53) link to broader compliance.
Non-compliance could lead to penalties, making AI literacy a core part of risk management strategies.
EU AI Act Article 6
Article 6 establishes the classification rules for high-risk AI systems, determining which systems are subject to the Act's most stringent obligations. An AI system is classified as high-risk if: (1) it is a safety component of, or is itself, a product covered by the Union harmonization legislation listed in Annex I and is subject to third-party conformity assessment; or (2) it falls under one of the use cases listed in Annex III. A derogation applies to Annex III systems that do not pose a significant risk of harm, for example those limited to narrow procedural tasks or preparatory activities.
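As a rough illustration of that decision logic, the following Python sketch mirrors the two Article 6 conditions and the Annex III derogation. The class, field, and function names are our own simplifications; the actual assessment is a legal determination, not a programmatic one:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Illustrative attributes standing in for a legal assessment.
    is_annex1_safety_component: bool    # safety component/product under Annex I legislation
    needs_third_party_assessment: bool  # subject to third-party conformity assessment
    in_annex3_use_case: bool            # falls under an Annex III use case
    poses_significant_risk: bool        # False if limited to narrow procedural/preparatory tasks

def is_high_risk(system: AISystem) -> bool:
    """Simplified Article 6 logic: condition (1) covers Annex I safety
    components requiring third-party conformity assessment; condition (2)
    covers Annex III use cases, unless the derogation for systems posing
    no significant risk of harm applies."""
    condition_1 = system.is_annex1_safety_component and system.needs_third_party_assessment
    condition_2 = system.in_annex3_use_case and system.poses_significant_risk
    return condition_1 or condition_2
```

Note that the derogation only narrows condition (2): an Annex I safety component cannot escape high-risk status by claiming it performs a narrow procedural task.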
Classification Criteria and Implications
- High-Risk Categories: Include biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice (per Annex III).
- Derogations: Systems not considered high-risk if they perform limited tasks without influencing decisions significantly.
- Requirements for High-Risk Systems: Involve risk management, data governance, transparency, and human oversight, applicable from August 2026.
- Guidelines: Commission guidelines on classification expected by February 2026.
If an organization you work with needs help with the EU AI Act and wants to become compliant, book a demo below with one of our privacy and compliance superhero team members.