France’s data protection authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), has opened a new phase of its PANAME project — an initiative designed to test and refine a practical audit tool for assessing whether artificial intelligence models comply with the GDPR.
PANAME, which stands for Privacy Auditing of AI Models, aims to create an open-source technical library capable of evaluating whether AI systems improperly retain or expose personal data from their training datasets. CNIL is now inviting public and private organizations across the European Union to participate in pilot testing of the tool.
Why AI Privacy Auditing Is Becoming Critical
As generative AI and machine learning systems become deeply embedded in digital services, regulators are increasingly focused on whether these models “memorize” personal data contained in training materials. Research has shown that certain AI models can, under specific conditions, reveal fragments of personal information if probed strategically.
Under the GDPR, if a model can be shown to retain or expose personal data, organizations deploying it may face additional compliance obligations — including lawful basis assessments, transparency requirements, and potentially Data Protection Impact Assessments (DPIAs).
The challenge is technical: how do you reliably test whether an AI model leaks personal data? PANAME is designed to answer that question.
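PANAME's actual test suite has not been published, but one widely studied technique for this kind of question is a loss-threshold membership-inference test: a model tends to assign unusually low loss to examples it was trained on, so low loss is evidence that a record was in the training set. The sketch below is a toy Python illustration of that idea; the `predict_proba` function and `TRAIN_SET` are hypothetical stand-ins for a real model and its training data, not part of PANAME.

```python
import math

# Toy "model": it memorizes its training set, so training examples get
# near-certain predictions while unseen examples get uncertain ones.
# This stands in for any classifier that returns P(label | example).
TRAIN_SET = {("alice", 1), ("bob", 0), ("carol", 1)}

def predict_proba(example, label):
    # Hypothetical behavior: overconfident on memorized training data.
    return 0.99 if (example, label) in TRAIN_SET else 0.55

def loss(example, label):
    """Cross-entropy loss of the model on one labeled example."""
    return -math.log(predict_proba(example, label))

def is_member(example, label, threshold=0.1):
    """Loss-threshold membership inference: flag an example as a likely
    training-set member if the model's loss on it is unusually low."""
    return loss(example, label) < threshold

print(is_member("alice", 1))  # training example -> True
print(is_member("dave", 0))   # unseen example  -> False
```

In practice the threshold is calibrated on held-out data, and a model that passes such tests (members and non-members are statistically indistinguishable) gives evidence that it has not retained individual records.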

What the PANAME Tool Will Do
The project seeks to build a standardized set of technical tests that organizations can use to evaluate AI systems. Rather than relying solely on theoretical legal analysis, PANAME focuses on measurable technical indicators of privacy risk.
Core Objectives of the Project
- Develop a shared library of privacy auditing tests for AI models
- Evaluate whether models retain or reproduce personal data from training datasets
- Help organizations assess compliance with GDPR data minimization and confidentiality principles
- Create practical tools that bridge legal requirements and technical implementation
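One concrete way to evaluate whether a model reproduces training data, drawn from published research rather than from PANAME's (unreleased) library, is a canary extraction test: plant a unique synthetic string in the training corpus, then probe the trained model to see whether it emits that string. A minimal Python sketch follows; `memorizing_model` is a deliberately simplistic stand-in for a real text generator.

```python
import secrets

def make_canary():
    """Generate a random marker string that cannot appear by chance."""
    return f"CANARY-{secrets.token_hex(8)}"

def leaks_canary(generate, canary, prompt="CANARY-"):
    """True if the model's completion reproduces the planted canary.
    `generate` is any text-generation callable (hypothetical here)."""
    return canary in generate(prompt)

# Plant the canary in a toy training corpus.
canary = make_canary()
training_corpus = f"public text ... {canary} ... more public text"

def memorizing_model(prompt):
    # Toy generator that recalls its training text verbatim: it returns
    # the corpus slice starting at the prompt, mimicking memorization.
    idx = training_corpus.find(prompt)
    return training_corpus[idx:idx + 60] if idx != -1 else ""

print(leaks_canary(memorizing_model, canary))  # -> True for this toy model
```

Because the canary is synthetic, any reproduction of it is unambiguous evidence of memorization, which is what makes this style of test attractive for a standardized audit library.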
By packaging these capabilities into an accessible framework, CNIL aims to lower the barrier for companies that lack in-house AI research teams but still need defensible compliance practices.
From Research Concept to Real-World Testing
Launched in mid-2025, the PANAME initiative is structured as an 18-month collaboration between CNIL and several technical and research partners. The current call for participation marks the beginning of the pilot phase, where selected organizations will test the tool in operational environments.
Participants will contribute feedback on usability, accuracy, and integration challenges. Their input will shape the final version of the audit library before broader publication.
This collaborative approach reflects a broader European strategy: regulation should not exist in isolation from technical feasibility. Instead, compliance frameworks must be built alongside engineering realities.
Closing the Gap Between AI Innovation and GDPR Obligations
One of the most persistent tensions in AI governance is the gap between legal expectations and technical capability. While the GDPR has always applied to AI systems that process personal data, organizations often struggle to determine when a model crosses that threshold.
If a model cannot reproduce personal information, compliance risk may be significantly lower. If it can, the regulatory picture changes immediately.
PANAME aims to introduce greater clarity into this grey zone by providing structured methods to test and document findings. That clarity benefits both regulators and organizations: it reduces uncertainty, strengthens accountability, and supports more consistent enforcement outcomes.
Part of Europe’s Broader AI Governance Strategy
The initiative arrives as Europe continues refining its broader AI regulatory framework. With the EU AI Act introducing new governance layers and risk-based obligations, technical auditability is becoming a core expectation.
PANAME complements this trajectory by focusing specifically on privacy — ensuring that AI systems can be evaluated against GDPR standards in a reproducible and transparent manner.
The emphasis throughout is on practical, testable compliance rather than policy statements or abstract legal interpretation.
Who Can Participate
Organizations established within EU member states are eligible to express interest in participating in the pilot phase. Both public institutions and private companies are encouraged to apply.
Selected participants will work directly with project partners to refine the audit tool and validate its effectiveness across different types of AI systems.
Why This Matters for AI Developers and Compliance Teams
As AI becomes foundational infrastructure across industries — from healthcare and finance to education and public administration — questions about training data and privacy exposure are no longer abstract concerns. They are operational risks.
PANAME signals that European regulators are moving beyond high-level guidance toward structured, technical enforcement readiness. AI developers and compliance professionals alike should expect auditability and demonstrable privacy safeguards to become standard expectations rather than optional enhancements.
In short, the future of trustworthy AI in Europe will not rest on policy declarations alone. It will depend on measurable, testable, and defensible compliance tools, and PANAME is a clear step in that direction.