The European Union continues to lead in regulating artificial intelligence with the AI Act, which entered into force in August 2024 and is taking effect in phases. On July 18, 2025, the European Commission released comprehensive guidelines clarifying the obligations of providers of general-purpose AI (GPAI) models. These guidelines, which evolved from an earlier working paper and public consultations, aim to demystify the scope of responsibilities under Chapter V of the AI Act. They emphasize a balanced approach to innovation and risk management, particularly for models that underpin a wide array of AI applications. In this piece on the European Commission's guidelines, we explore the key elements while tying them to privacy considerations under regulations like the GDPR and drawing parallels to global frameworks such as those from NIST. This integration highlights how AI governance increasingly intertwines with data protection to foster trustworthy systems. If you'd like a consultation from a privacy and AI compliance expert, you can book a demo with a Captain Compliance superhero team member today.
Understanding GPAI Models: Definitions and Scope
At the heart of the guidelines is a refined definition of GPAI models, drawn from Article 3(63) of the AI Act. These are AI models trained on vast datasets to perform diverse tasks, from text generation to image creation, without being tailored to specific uses. A key indicator is a training-compute threshold of 10²³ floating-point operations (FLOP), which signals a model's generality when combined with capabilities in areas like language processing or multimedia generation. The guidelines eschew rigid legal presumptions, instead offering an "indicative criterion" for classification that leaves flexibility for evolving technologies.
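To make the 10²³ FLOP criterion concrete, here is a minimal back-of-the-envelope sketch in Python. It uses the common 6ND heuristic (compute ≈ 6 × parameters × training tokens) from the scaling-laws literature; the heuristic and the model figures are illustrative assumptions, not the Commission's prescribed estimation method, and the compute figure alone is not determinative without the accompanying generality capabilities.

```python
# Rough check of a model's estimated training compute against the guidelines'
# indicative GPAI criterion. The 6*N*D rule of thumb and the example model
# figures below are illustrative assumptions, not an official methodology.

GPAI_INDICATIVE_FLOP = 1e23  # indicative criterion from the guidelines

def estimate_training_flop(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute in FLOP via the 6*N*D heuristic."""
    return 6 * num_parameters * num_training_tokens

# Hypothetical model: 7e9 parameters trained on 2e12 tokens.
flop = estimate_training_flop(7e9, 2e12)  # ~8.4e22 FLOP
print(f"Estimated training compute: {flop:.2e} FLOP")
print("Meets indicative GPAI criterion:", flop >= GPAI_INDICATIVE_FLOP)
```

In this hypothetical, the estimate lands just below 10²³ FLOP, illustrating why the guidelines treat the figure as indicative rather than a bright-line presumption.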
The lifecycle of a GPAI model spans initial large-scale pre-training and any subsequent fine-tuning by the original provider, which is treated as development of the same model rather than the creation of a new one. However, downstream actors who significantly modify the model, using more than one-third of the original training compute, become providers of a distinct GPAI model and inherit the full set of obligations (see the sketch below). This delineation addresses accountability challenges across AI supply chains, ensuring responsibility is assigned throughout the ecosystem.
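The one-third rule described above reduces to a simple comparison. The following sketch operationalizes it as stated in the guidelines; the function name and example figures are our own illustration.

```python
# Sketch of the downstream-modification rule: a modifier is treated as the
# provider of a new GPAI model when the compute used for the modification
# exceeds one third of the original model's training compute.

def becomes_new_provider(original_training_flop: float, modification_flop: float) -> bool:
    """Return True if a downstream modifier inherits full provider obligations."""
    return modification_flop > original_training_flop / 3

# Hypothetical fine-tune: 4e22 FLOP spent adapting a model trained with 9e22 FLOP.
print(becomes_new_provider(9e22, 4e22))  # True: 4e22 > 3e22
```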
Obligations for All Providers of GPAI Models
Regardless of size or risk level, all GPAI providers face baseline requirements that promote transparency and accountability. These include maintaining detailed technical documentation on model architecture, training processes, and performance metrics, which must be made available to the AI Office and to downstream providers upon request. Providers must also publish a sufficiently detailed summary of the content used for training, comply with EU copyright law (e.g., respecting rights holders' opt-outs from text and data mining), and ensure models are identifiable when integrated into downstream AI systems.
A notable pathway to compliance is adherence to the GPAI Code of Practice, published on July 10, 2025, which simplifies demonstrating due diligence. Non-adherents must still report their risk management measures to the AI Office, potentially inviting greater scrutiny. Open-source models enjoy exemptions if they meet "free and open-source" criteria, including unrestricted public access to weights and parameters, though safety-related license restrictions are permissible without forfeiting this status.
Enhanced Obligations for GPAI Models with Systemic Risks
For models posing "systemic risks", presumed where cumulative training compute exceeds 10²⁵ FLOP or where a model otherwise demonstrates high-impact capabilities, additional safeguards apply. Providers must notify the Commission, which evaluates factors like advanced reasoning or real-world manipulation potential. A provider contesting the classification bears the burden of proving that the model does not present systemic risks.
Obligations include rigorous model evaluations, adversarial testing, and risk mitigation strategies, such as cybersecurity protocols and the reporting of "serious incidents" like widespread harms. Providers must also engage in ongoing monitoring and cooperate with authorities, underscoring a proactive stance against large-scale harms from powerful AI.
Compliance Timelines and Enforcement Mechanisms
The guidelines outline phased implementation: Models placed on the market before August 2, 2025, have until August 2, 2027, to comply, with leniency for legacy models where retraining is impractical—provided justifications are documented. Enforcement kicks in from August 2, 2026, via the AI Office, adopting a collaborative, proportionate approach. Fines may be mitigated for Code of Practice adherents, while proactive reporting and confidentiality protections encourage voluntary compliance.
Challenges persist, such as estimating training compute within the permitted 30% margin of error (illustrated below) and keeping pace with rapid technological change, but the guidelines provide practical examples, like handling synthetic data or model merging.
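The 30% tolerance matters most when an estimate sits near a threshold. The sketch below shows one way to reason about it: a classification is only clear-cut when the entire uncertainty band falls on one side. This margin handling is our own illustrative interpretation, not a procedure prescribed by the guidelines.

```python
# Applying a 30% margin of error to a compute estimate before comparing it
# against a regulatory threshold. Illustrative interpretation only.

ERROR_MARGIN = 0.30

def threshold_position(estimated_flop: float, threshold: float) -> str:
    low = estimated_flop * (1 - ERROR_MARGIN)
    high = estimated_flop * (1 + ERROR_MARGIN)
    if low >= threshold:
        return "above threshold even at the low end of the estimate"
    if high < threshold:
        return "below threshold even at the high end of the estimate"
    return "inconclusive: the uncertainty band straddles the threshold"

print(threshold_position(9e22, 1e23))  # band 6.3e22-1.17e23 straddles 1e23
print(threshold_position(2e23, 1e23))  # clearly above, even at 1.4e23
```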
Tying into Privacy Regulations and Broader AI Governance
While the guidelines focus on AI-specific risks, they inherently intersect with privacy frameworks like the GDPR, which governs personal data processing in AI training and deployment. Transparency obligations, such as data summaries, align with GDPR’s principles of lawfulness and accountability, helping prevent unauthorized use of personal information and enabling data subjects’ rights. For instance, copyright compliance indirectly supports privacy by respecting content creators’ controls over sensitive data.
Globally, these EU guidelines resonate with NIST’s AI Risk Management Framework, which emphasizes trustworthy AI through governance and measurement. Organizations can integrate EU obligations with NIST’s voluntary tools for a holistic approach, addressing both cyber risks and privacy harms.
Differential Privacy as a Complementary Tool
In mitigating privacy risks within GPAI models, techniques like differential privacy can mathematically bound how much a trained model or published statistic reveals about any individual in its training data. For insights into NIST's guidance on this, refer to: Understanding NIST SP 800-226: A Guide to Evaluating Differential Privacy Guarantees.
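As a flavor of how differential privacy works in practice, here is a minimal sketch of the Laplace mechanism, the textbook building block discussed in NIST SP 800-226. Releasing a query answer plus Laplace noise scaled to sensitivity/ε satisfies ε-differential privacy for that query; the counting query and parameter values below are illustrative assumptions.

```python
import numpy as np

# Minimal Laplace-mechanism sketch: add noise calibrated to a query's
# L1 sensitivity and the privacy budget epsilon. Parameters are illustrative.

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release true_value perturbed with Laplace noise of scale sensitivity/epsilon."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
exact_count = 1_240  # e.g., how many training records match some criterion
# A counting query changes by at most 1 when one record is added or removed,
# so its L1 sensitivity is 1.
noisy_count = laplace_mechanism(exact_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(f"Differentially private count: {noisy_count:.1f}")
```

A smaller ε yields stronger privacy at the cost of noisier answers; choosing that trade-off is the central design decision the NIST guidance helps evaluate.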
GPAI Models: Key Takeaways
- GPAI models are defined by their broad capabilities and high computational thresholds, with guidelines providing flexible classification criteria to adapt to technological advancements.
- All providers must ensure transparency through documentation, data summaries, and copyright compliance, with open-source exemptions available under specific conditions.
- Systemic risk models face stricter requirements, including evaluations, testing, and incident reporting to mitigate potential large-scale harms.
- Compliance timelines offer grace periods for existing models, and enforcement emphasizes collaboration via the AI Office and Codes of Practice.
- Privacy intersections with GDPR highlight the need for integrated approaches, complemented by tools like NIST frameworks and differential privacy techniques for robust data protection.
- Overall, the guidelines balance innovation with accountability, encouraging global alignment in AI governance to protect society while fostering ethical development.
A Path to Responsible AI Innovation
The European Commission’s guidelines mark a milestone in regulating GPAI models, offering clarity amid complexity while promoting ethical AI. By weaving in privacy safeguards and aligning with international standards, they pave the way for providers to innovate responsibly. As enforcement looms, stakeholders should prioritize integrated compliance strategies to navigate this evolving landscape, ensuring AI benefits society without compromising fundamental rights.