Age Assurance Technologies, Emerging Standards, and Risk Management

Ask most people what “age verification” means and they will describe a dropdown menu asking for a birth year, or a checkbox declaring they are over eighteen. Both are security theater. Both have been the de facto standard for online age gating for the better part of two decades. And both are now being rapidly displaced — by legislation, by litigation, by child safety advocacy, and by a technology sector that has finally produced credible alternatives — in favor of a more demanding and more consequential discipline called age assurance.

For privacy professionals, age assurance is simultaneously one of the most urgent and most technically complex compliance challenges. The regulatory pressure is accelerating globally. The technology landscape is evolving faster than most compliance programs can track. A landmark international standard has just been published, made freely available in a move that signals just how serious the global regulatory community is about adoption. And the fundamental tension at the heart of age assurance — that the more reliably you verify age, the more personal data you tend to collect, and the more personal data you collect, the more privacy risk you create — has not been resolved. It has been redefined, and the redefinition has significant implications for how organizations should design and govern their age assurance implementations.

This article is a practitioner’s guide to the current state of age assurance technologies, the emerging standards that are reshaping deployment requirements, and the risk management framework that privacy professionals need to apply when advising organizations on this domain.

What Age Assurance Actually Is — And Why It Is Not Just Age Verification

The terminology matters here. “Age verification” implies a binary determination: the user is above a specific age threshold or they are not. “Age assurance” is broader — it encompasses the full range of approaches through which an organization forms a reasonable belief about a user’s age or age group for the purpose of making eligibility determinations. The distinction is not semantic. It has direct consequences for which methods are proportionate in which contexts, and the new international standard adopts the age assurance framing deliberately.

The updated Future of Privacy Forum infographic on age assurance identifies four approaches: declaration, estimation, verification, and inference. The original framework covered three — declaration, estimation, and verification — but the updated version adds inference as a fourth category, which draws reasonable conclusions about a user’s age range based on behavioral signals, account characteristics, or financial transactions.

Breaking these down:

  • Self-declaration is the cheapest, least privacy-invasive, and most easily circumvented method. A user states their age — via a form field, a checkbox, or a date of birth entry — and the platform accepts it. On its own it offers no meaningful resistance to misrepresentation, but it remains widely deployed as the baseline layer in tiered assurance architectures.
  • Age estimation uses biometric or behavioral signals to infer a user’s likely age range without requiring them to produce identity documents. Facial age estimation, in which AI analysis of a selfie produces an estimated age range, is the most mature commercial implementation, and providers are already deploying it at scale in both retail and online environments. However, accuracy concerns remain documented and material, particularly near legally significant age thresholds and for demographic groups including young women and non-Caucasian users.
  • Age verification involves checking a user’s age against authoritative third-party sources: government-issued ID documents, credit bureau records, mobile network operator data, or digital identity credentials. This category provides the highest assurance levels but also creates the most significant data collection, and therefore the most significant privacy exposure. Requiring users to submit hard identifiers, such as a government-issued ID or biometric data, increases accuracy, but access to such sensitive and identifying personal information can compromise an individual’s ability to remain anonymous online.
  • Age inference is the newest category and the least formally governed. YouTube’s algorithm inferring a user’s age based on their behavior and preferences is a relatively new example of age inference in practice. Inference operates on behavioral signals — browsing history, content preferences, transaction patterns, account age — rather than direct assurance events. It raises distinct questions about transparency and consent, since users are typically unaware their behavior is being analyzed to make eligibility determinations.

There is no one-size-fits-all age assurance solution. Effective approaches are risk-based, use-case-specific, and privacy-preserving by design, balancing assurance goals against the rights and expectations of users.
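The risk-based logic described above can be sketched as a simple selector. The tiers, method names, and escalation rules below are illustrative assumptions for this sketch, not requirements drawn from any statute or from ISO/IEC 27566-1:

```python
# Illustrative sketch: mapping use-case risk to a proportionate assurance
# method. Tiers and method names are hypothetical examples.

from enum import Enum

class Risk(Enum):
    LOW = 1     # e.g. age-tailored UX defaults
    MEDIUM = 2  # e.g. social features, teen content gates
    HIGH = 3    # e.g. adult content, gambling, high-risk financial products

def select_method(risk: Risk, user_has_formal_id: bool) -> str:
    """Return the least data-invasive method that satisfies the risk tier."""
    if risk is Risk.LOW:
        return "self-declaration"
    if risk is Risk.MEDIUM:
        # Estimation avoids hard identifiers; inference may supplement it.
        return "age estimation (facial or behavioral)"
    # HIGH risk: strongest assurance, with an alternative pathway for
    # users who lack government-issued ID (an equity requirement).
    if user_has_formal_id:
        return "document verification"
    return "vouching or estimation with liveness"

print(select_method(Risk.HIGH, user_has_formal_id=False))
```

The point of the sketch is the shape of the decision, not the specific mappings: assurance strength escalates with use-case risk, and every high-assurance path needs a fallback for users without formal ID.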

The Regulatory Landscape: Global, Accelerating, and Legally Consequential

The regulatory pressure driving age assurance adoption has moved from rhetorical to operational at speed. In 2025, lawmakers and enforcement agencies around the globe kept one issue firmly in the spotlight: the privacy and safety of minors online, and this heightened focus shows no sign of abating.

In the United States, the legislative picture is fragmented but intensifying. Multiple states passed laws requiring app stores to provide age category signals to developers, referred to as app store accountability acts. In December 2025, the House Committee on Energy and Commerce considered a flurry of minor-privacy and online safety bills, advancing several to the full House. The FTC hosted a workshop specifically on age assurance technologies in January 2026, a signal that federal enforcement priorities are actively taking shape. At the state level, attorneys general have been filing actions under both standalone minor safety statutes and existing UDAP frameworks. The Supreme Court’s 2025 decision upholding Texas’s age verification law for adult content websites — finding it did not violate the First Amendment — removed one of the primary constitutional arguments against state-level age verification mandates and is expected to accelerate copycat legislation across states.

In Europe, the Digital Services Act is the primary vehicle, with the European Commission taking an increasingly specific position on implementation. The European Commission published a second version of its age verification blueprint for digital platforms, with five EU countries — Denmark, France, Greece, Italy, and Spain — piloting the solution in 2025 using a secure “mini wallet” compatible with Digital Identity Wallets. The EU Council has separately called for age assurance implementations that allow users to prove they are old enough to use a platform without giving their exact age — a direct endorsement of privacy-preserving zero-knowledge proof approaches. The European Commission’s guidelines on minors emphasize proportionality and a risk-based approach to age assurance under the DSA, acknowledging that not every context requires the same level of assurance.

In the UK, the Online Safety Act 2023 sets relatively clear rules on when age assurance must be used, and Ofcom’s early enforcement has focused on adult content platforms. Aylo announced it will restrict access to its services for new UK users from February 2026, citing failures in the operation of the Online Safety Act — underscoring that even major platforms are finding compliance more demanding than expected. The Ofcom focus is expected to expand from adult content to a broader range of services in 2026.

Australia’s social media ban became effective December 10, 2025, preventing minors under 16 from holding accounts on designated platforms, with responsibility placed on platforms to verify users’ ages. Singapore has taken a different approach, requiring app stores rather than service providers to carry out age assurance under its Code of Practice for Online Safety for App Distribution Services. The diversity of regulatory approaches globally — who must verify, by what method, to what standard, with what accountability — is itself a compliance challenge for organizations operating across multiple jurisdictions.

ISO/IEC 27566-1:2025: The Standard That Changes Everything

The single most significant development in the age assurance governance landscape in late 2025 was the publication of ISO/IEC 27566-1:2025 — the first international standard for age assurance systems. The standard plants a much-needed stake in the ground for age assurance technologies, and should help bring a measure of stability to an industry that has been on a regulatory roller-coaster ride in 2025.

ISO/IEC 27566-1:2025 establishes a framework for age assurance systems and describes their core characteristics, including privacy and security, for enabling age-related eligibility decisions. The language is carefully chosen to provide both solidity and enough breadth to accommodate inevitable change. The standard introduces the term “age-related eligibility” — a deliberate linguistic choice that fills the gap between the binary connotations of “age verification” and the broader concept of assurance-based eligibility determinations. It also provides refined and precise definitions for age verification, age estimation, age inference, and successive validation — addressing the definitional inconsistencies that have complicated international regulatory dialogue.

Critically, the European Commission, International Telecommunication Union, and the Age Verification Providers Association successfully petitioned the ISO and IEC to make the standard freely available — an unusual step, since standards are typically offered at a cost, but one reflecting the view that being effective also means being accessible. A standard that only well-resourced organizations can afford to implement is not a standard that achieves its public interest purpose.

Part 2 of the standard — ISO/IEC 27566-2, covering technical approaches and guidance for implementation — is in development and will provide the more granular technical guidance that implementing organizations need. The 2026 Global Age Assurance Standards Summit in Manchester in April is expected to be a major forum for practitioners working through implementation of 27566-1 and contributing to the Part 2 development process.

Emerging Technologies: The Privacy-Preserving Frontier

The technical evolution of age assurance is moving rapidly, and the direction of travel is clearly toward privacy-preserving architectures that minimize the personal data required to make an age determination. For privacy professionals, understanding the emerging technology landscape is essential to advising organizations on which approaches are both legally compliant and genuinely privacy-protective.

Zero-Knowledge Proofs

Zero-knowledge proofs (ZKPs) are the technology that regulators, privacy advocates, and much of the standards community have converged on as the most promising direction for privacy-preserving age assurance. ZKPs are a cutting-edge cryptographic method that allows one party to prove to another that a statement is true without conveying any other information. In the context of age verification, ZKP-based systems can confirm that a user is above a certain age without sharing the actual age or other sensitive data.

The privacy implications are transformative. A ZKP-based age assurance system produces a binary attestation — “this user is over eighteen” — without transmitting the user’s date of birth, identity document details, or any other personal information to the platform. The platform receives confirmation of eligibility without receiving the underlying data that would enable tracking, profiling, or re-identification. This privacy-first approach is well suited to regions with stringent data protection laws and could eventually become the gold standard for age assurance.
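The attestation flow can be sketched in a few lines. This is not a real zero-knowledge proof (a real ZKP requires a cryptographic proof system well beyond a sketch); it only illustrates the data-minimization property: the platform verifies a signed boolean claim and never receives the date of birth. The issuer key and the HMAC construction are simplifying assumptions standing in for an issuer's asymmetric digital signature:

```python
# Sketch of the attestation flow, NOT a real zero-knowledge proof:
# a trusted issuer who has seen the user's date of birth signs only the
# boolean claim, and the platform verifies the signature without ever
# receiving the birth date. HMAC stands in for a real digital signature.

import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-issuer-key"  # hypothetical; real issuers use asymmetric keys

def issue_attestation(date_of_birth: date, threshold: int, today: date) -> dict:
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day))
    claim = json.dumps({"over": threshold, "result": age >= threshold})
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}  # no DOB, no exact age leaves the issuer

def platform_verify(att: dict) -> bool:
    expected = hmac.new(ISSUER_KEY, att["claim"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"]) and json.loads(att["claim"])["result"]

att = issue_attestation(date(2000, 5, 1), 18, date(2026, 1, 15))
print(platform_verify(att))     # True
print("birth" in att["claim"])  # False: the platform never sees the DOB
```

What a true ZKP adds beyond this sketch is that even the issuer's involvement at presentation time is unnecessary and the proof itself reveals nothing but the single bit being proven.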

Apple and Google have both moved into this space. In February 2025, Apple announced an update to its age-ratings policies allowing parents to share their child’s age range with app developers. In March 2025, Google released a legislative framework supporting zero-knowledge proof age signals with parental consent, allowing parents to share ZKP age signals with developers. Both companies have rolled out software aligned with W3C standards to enable ZKP-verified credential sharing for age verification in their digital wallets.

The euCONSENT project’s AgeAware network — described as a “standards-based, anonymised, interoperable age verification network” — and OpenAge, built on passkey technology by k-ID, both launched in late 2025 as reusable ZKP-based age assurance systems. These allow users to prove their age once and apply that proof across different platforms and services, with the reusability model closely related to interoperability.

However, ZKPs are not a complete solution in isolation. ZKPs alone are not a digital ID solution to protecting user privacy. They address the data minimization problem at the point of age assertion, but they depend on the upstream credential — the government ID, the credit bureau record, the digital identity wallet — being trustworthy, privacy-protective, and accessible to all users. Where that upstream credential does not exist or requires centralizing sensitive identity data in a government or private-sector database, the privacy problem is not eliminated but shifted.

On-Device Processing

A second architectural approach that privacy professionals should understand is on-device age estimation — where the biometric processing required to estimate age happens on the user’s device rather than on a remote server, and no image or biometric data is transmitted. In the Australian Government’s Age Assurance Technology Trial, Privately was the only provider offering fully on-device processing, ensuring that no image or biometric data of the consumer ever leaves their device.

On-device processing eliminates the centralized database risk that makes conventional biometric age estimation systems a target for breaches. The age determination is made locally, a token or signal is produced, and only that signal is transmitted to the platform. This architecture aligns with the GDPR’s data minimization principles in a way that cloud-based biometric processing typically does not, and it addresses the specific concern that facial estimation systems require centralized storage of biometric data to train and improve accuracy, creating privacy problems through that centralized storage.
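The on-device pattern can be illustrated with a stub: the raw image is consumed and discarded locally, and only a minimal boolean signal crosses the network boundary. `estimate_age` here is a hypothetical stand-in for an on-device model, not any vendor's actual API:

```python
# Illustrative sketch of the on-device pattern: the raw image is processed
# and discarded locally; only a minimal signal is sent to the platform.

def estimate_age(image_bytes: bytes) -> int:
    # Hypothetical local ML model; stubbed with a fixed estimate here.
    return 24

def on_device_check(image_bytes: bytes, threshold: int) -> dict:
    estimate = estimate_age(image_bytes)
    del image_bytes  # source data is never persisted or transmitted
    # The signal sent to the platform carries no image or biometric data.
    return {"meets_threshold": estimate >= threshold}

signal = on_device_check(b"\x89PNG...fake image bytes...", 18)
print(signal)  # {'meets_threshold': True}
```

The privacy property is architectural: the platform's API surface only ever accepts the boolean signal, so retention of biometric source data is technically impossible rather than merely prohibited by policy.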

Age Tokens and Reusable Credentials

The core idea of a reusable age check is that a user verifies their age once and can then apply proof of that single verification across different platforms and services. The privacy benefits of this model are significant: rather than submitting identity documents to dozens of different platforms, each of which then stores or processes that data independently, a user submits once to a trusted verifier and receives a reusable token that platforms can accept without accessing the underlying identity data.

The tension in this model is governance. Who issues the tokens? Who audits the issuers? What happens when a token is compromised or a user’s eligibility status changes? These are not hypothetical concerns — they are the operational questions that the euCONSENT governance model and the emerging ISO/IEC 27566 framework are specifically designed to address.

The Risk Landscape: What Can Go Wrong

Age assurance carries a distinct set of risks, including excessive data collection and retention, secondary data use, lack of interoperability, false positives and negatives, data breaches, and user acceptance challenges.

For privacy professionals conducting risk assessments on age assurance implementations, the following risk categories warrant specific attention:

Excessive data collection. The most reliable age assurance methods tend to require the most personal data. Document-based verification requires platforms to collect and store sensitive identity documents such as driver’s licenses and passports, creating centralized databases that are high-value breach targets. When breached, the stolen documents enable identity theft, financial fraud, and long-term harm that extends far beyond the compromised platform. The proportionality question — does the level of assurance required by the use case actually justify the data collection required to achieve it — must be answered explicitly in any compliant implementation.

Secondary data use. Age assurance systems collect data for a specific purpose: determining whether a user meets an age threshold. That data — biometric images, identity document scans, behavioral signals — may be repurposed for advertising targeting, law enforcement assistance, demographic analytics, or model training without users’ knowledge or consent. Secondary data use of assurance data is identified as one of the emerging risks of modern age assurance systems, and organizations must implement technical controls — not just policy commitments — to prevent it.

Loss of anonymity through linkage. Reusable token architectures carry a specific linkage risk: if the same token is presented across multiple platforms, it may be possible for those platforms, or a central token issuer, to build a cross-platform behavioral profile of the user. The design of token systems must specifically address this risk, and unlinkable token architectures — where the token presented to Platform A cannot be correlated with the token presented to Platform B — are the appropriate technical standard.
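One way to sketch unlinkability is per-audience token derivation: each platform receives a token derived from a user-held secret and that platform's identifier, so tokens presented to different platforms share no common value to join on. Real deployments use blinded or anonymous credential schemes; the hash construction below is only an illustration of the property, not a production design:

```python
# Sketch of unlinkable per-platform tokens: without the user-held secret,
# Platform A's token and Platform B's token cannot be correlated.

import hashlib, secrets

user_secret = secrets.token_bytes(32)  # held only in the user's wallet

def token_for(platform_id: str) -> str:
    # Each audience gets a distinct, stable token bound to its identifier.
    return hashlib.sha256(user_secret + platform_id.encode()).hexdigest()

tok_a = token_for("platform-a.example")
tok_b = token_for("platform-b.example")
print(tok_a != tok_b)                            # True: nothing to join on
print(token_for("platform-a.example") == tok_a)  # True: stable per platform
```

The design choice worth noting is that unlinkability and reusability are compatible: the token is stable for a given platform (so the user verifies once), yet distinct across platforms (so no cross-platform profile can be assembled).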

False positives and negatives. Age estimation systems have documented accuracy failure modes with demographic disparities, with error rates highest for individuals close to the age threshold, young women, and non-Caucasian users. Inaccuracies can wrongly block some teens or allow younger children access. False positives — systems that incorrectly identify adult users as minors and deny them access — create both user experience problems and potential discrimination claims. False negatives — systems that incorrectly identify minors as adults — create regulatory exposure. Both failure modes need to be understood and managed.
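One common mitigation, sketched below with assumed numbers, is a buffer around the threshold: estimates that clear the threshold by a wide margin pass on estimation alone, while estimates inside the error band are escalated to a stronger method rather than being silently blocked or admitted. The five-year buffer is a hypothetical parameter, not a regulatory figure:

```python
# Sketch of a "buffer age" policy for estimation error near the threshold.
# The buffer width is a hypothetical tuning parameter.

def decide(estimated_age: float, threshold: int, buffer: int = 5) -> str:
    if estimated_age >= threshold + buffer:
        return "pass"       # confidently above the threshold
    if estimated_age < threshold - buffer:
        return "deny"       # confidently below the threshold
    return "escalate"       # too close to call: offer document verification

print(decide(27.0, 18))  # pass
print(decide(19.0, 18))  # escalate: within the error margin
print(decide(10.0, 18))  # deny
```

Escalation converts both failure modes into a user-visible, contestable step, which also satisfies the recourse expectations discussed under transparency below.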

Presentation attacks. Anti-circumvention is a persistent challenge in age assurance. Circumvention risks such as presentation attacks or shared-device misuse undermine the reliability of age assurance systems regardless of how sophisticated the underlying technology is. Presentation Attack Detection (PAD) mechanisms — liveness detection, challenge-response protocols, device-binding — are necessary components of robust age assurance implementations. Friction cuts the other way: constant age checks at every login choke traffic and drive users to alternative platforms and VPNs, and a system that users routinely circumvent provides neither age assurance nor privacy protection.
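A challenge-response check, one PAD-adjacent control, can be sketched as follows: the platform issues a fresh nonce for each session, and the user's device must answer with a MAC computed under a key bound to the device at enrollment, so a replayed answer from an earlier session fails. HMAC here stands in for a device-bound passkey signature; real deployments would use asymmetric, hardware-backed keys:

```python
# Sketch of challenge-response device binding against replayed attestations:
# a fresh nonce per session means an old response cannot be reused.

import hmac, hashlib, secrets

device_key = secrets.token_bytes(32)  # bound to this device at enrollment

def respond(nonce: bytes) -> str:
    # Computed on the device holding the enrolled key.
    return hmac.new(device_key, nonce, hashlib.sha256).hexdigest()

def verify(nonce: bytes, response: str) -> bool:
    # Platform-side check that the response matches this session's nonce.
    return hmac.compare_digest(respond(nonce), response)

nonce1 = secrets.token_bytes(16)
resp1 = respond(nonce1)
print(verify(nonce1, resp1))                 # True: fresh response accepted
print(verify(secrets.token_bytes(16), resp1))  # False: replayed response fails
```

Liveness detection addresses a different attack surface (a photo or mask presented to the camera) and has no meaningful software-only sketch, which is why it is typically delivered as part of the vendor's capture SDK.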

Surveillance function creep. Digital rights groups fear that mandatory age verification systems, especially those involving biometric or identity document checks, could morph into tools for broader surveillance or profiling, undermining anonymity and user freedoms online. The risk is not hypothetical: a platform with robust age assurance infrastructure has also built an identity verification infrastructure. The governance boundaries between these two functions must be explicitly drawn and technically enforced.

A Risk Management Framework for Age Assurance

For privacy professionals advising organizations implementing or evaluating age assurance systems, the risk management approach should be structured around the following principles:

Proportionality first. The starting point for any age assurance implementation should be the question: what level of assurance is actually required by the use case, and what is the minimum data collection necessary to achieve that level? A gaming platform restricting content to users over thirteen has different proportionality requirements than a financial services platform preventing access to high-risk investment products by minors. Deploying document-based identity verification where behavioral inference or soft estimation would satisfy the regulatory requirement is disproportionate and creates unnecessary privacy exposure.

Privacy by design at the architecture level. The privacy properties of an age assurance system are largely determined at the design stage, not the policy stage. On-device processing, ZKP-based attestations, unlinkable tokens, and immediate deletion of source data after template generation are architectural choices that produce genuinely privacy-protective systems. A privacy policy that says “we don’t retain biometric data” implemented on a cloud-based biometric system is not equivalent to a system that technically cannot retain biometric data because processing never leaves the user’s device. Risk management tools include on-device processing and immediate deletion of source data, separation of processing across third parties, user-binding through passkeys or liveness detection, and tokenization and zero-knowledge proofs to limit data disclosure.

Standards alignment. ISO/IEC 27566-1:2025 and IEEE 2089.1 provide the reference frameworks for assessing age assurance system design. The standard provides a framework for reasoning about age assurance systems — not as a rulebook, but as a structured approach to judgment. Organizations that document their implementation against the ISO standard will be better positioned in regulatory examinations and enforcement proceedings.

Vendor due diligence. While some companies providing age estimation based on face images are willing to participate in NIST’s FATE-AEV program, effectively none provide open-source code or training data that would allow audits by academic researchers or human rights advocates. This opacity makes independent accuracy and bias assessment extremely difficult. Vendor selection for age assurance technology should include requirements for third-party accuracy auditing, demographic fairness testing, data processing documentation meeting Article 28 GDPR requirements, and a contractual prohibition on secondary use of age assurance data.

Accessibility and equity. Age assurance systems that work well for users with government-issued ID, access to a smartphone, and biometric features that perform well in trained models leave out significant user populations. Offering different options allows users to choose entities they trust and offers different avenues for complying with age verification mandates, particularly in circumstances where individuals lack access to formal ID. A compliant age assurance architecture must account for users who cannot or will not use the primary verification method, and alternative pathways must not be so burdensome that they effectively deny access to legal content.

Transparency and user control. Users subject to age assurance have a right to understand what is being assessed, how the determination is made, what data is collected, how long it is retained, and what recourse they have if the system incorrectly determines their age. These disclosure requirements are not merely good practice — they are legally required under GDPR, the UK Online Safety Act, and an expanding range of state frameworks. Age assurance systems that produce age determinations without meaningful transparency about the process are likely to face both regulatory and litigation exposure.

What Comes Next

By the end of this year we will begin to see which parts of the age assurance ecosystem are likely to endure, and which are past their use-by date. The consolidation is already underway. ZKP-based reusable credential systems from Apple, Google, euCONSENT, and k-ID/OpenAge are competing for the architecture standard that will define privacy-preserving age assurance for the next decade. The ISO/IEC 27566-2 technical implementation guidance, once published, will narrow the design space considerably. And regulatory enforcement — from Ofcom, from the FTC, from state attorneys general — will begin to define what “proportionate” age assurance actually means in practice, rather than in policy papers.

For privacy professionals, the strategic imperative is to engage with this domain now, before enforcement crystallizes and organizations are caught making compliance decisions under time pressure. The organizations that will navigate the age assurance landscape successfully are those that treat it as a privacy design challenge rather than a checkbox requirement — building systems that are genuinely privacy-protective not because a regulator has specifically mandated a particular technical approach, but because the irrevocability of identity data exposure demands nothing less.

Age assurance is not going to get simpler. But it is, finally, getting standards, and Captain Compliance is here to help your organization comply.

Online Privacy Compliance Made Easy

Captain Compliance makes it easy to develop, oversee, and expand your privacy program. Book a demo or start a trial now.