When Your AI Agent Has Your Credit Card: FIDO, Google, and Mastercard Quietly Begin Building the Rules for Agentic Commerce

The FIDO Alliance announcement reported by Wired this week reads, on the surface, like a niche piece of payments-industry news: the standards body behind passkeys is partnering with Google and Mastercard to develop technical standards that prevent AI agents from making unauthorized credit card purchases on behalf of consumers. The framing in the trade press has been “guardrails for shopping bots.”

It is something more significant than that. What FIDO is starting to build is the authentication and authorization architecture for agentic commerce — the emerging category of transactions in which an AI agent acts on a human’s behalf, often autonomously, sometimes across multiple parties, and almost always faster than the human can review. The existing payment-authentication stack — passkeys, biometrics, 3D Secure, PSD2 strong customer authentication, the whole apparatus refined over twenty years of fraud arms races — was built on a foundational assumption that a human is at the keyboard. Agentic commerce breaks that assumption. The standards work now beginning will define the default rules for trillions of dollars of commerce over the next decade.

For privacy, compliance, payments, and risk teams, this is a development worth understanding properly. Below is what FIDO is actually trying to solve, why the existing model breaks, the governance and privacy questions the headline coverage tends to underplay, and the practical posture in-house teams should adopt now — well before formal standards are finalized.

Why the Existing Authentication Model Breaks

FIDO Alliance CEO Andrew Shikiar’s quote in the Wired piece captured the structural problem: “preexisting models aren’t necessarily designed for this sort of paradigm — they weren’t built to contemplate actions performed on a user’s behalf.”

That is technically true and substantively understated. The entire architecture of consumer authentication assumes a human-presence model. Passkeys bind a credential to a device a human possesses, authorized by a biometric or PIN the human supplies in real time. WebAuthn requires a “user verification” step. PSD2 strong customer authentication in the EU mandates two factors from independent categories — knowledge, possession, inherence — and explicit dynamic linking between the authentication and the specific transaction. 3D Secure works by inserting a human-readable challenge into the payment flow.
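
To make the assumption concrete, here is what a minimal browser-side WebAuthn assertion request looks like (a sketch with placeholder values; in production the challenge is issued by the relying party's server). The userVerification option is the human-presence model in a single line: the platform authenticator will not release a signature until a person supplies a biometric or PIN, in real time, on a device they physically hold.

```ts
// Minimal WebAuthn assertion request, as run in a browser page.
// The challenge would normally come from the server; a random value
// stands in here so the sketch is self-contained.
const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rpId: "merchant.example",        // placeholder relying party
    userVerification: "required",    // no present human, no signature
    timeout: 60_000,
  },
});
```

A server-side agent has no platform authenticator and no human to satisfy that check; the flow has nothing to bind to.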

When an AI agent transacts, every one of those mechanisms has to be either bypassed or spoofed. The agent does not have a fingerprint. It does not have a registered device in the traditional sense. It cannot reliably read a 3D Secure challenge displayed in a browser the human is not watching. The agent is a software process running on a server somewhere, holding a credential that was issued to a human, executing transactions that the human may or may not have specifically approved.

There are three families of failure mode worth distinguishing.

The “agent acts beyond authorization” failure. The user told the agent to “buy a coffee maker,” and the agent purchased a $400 espresso machine instead of a $40 drip coffee maker, or bought five of them, or bought one from a merchant the user would never have chosen. Not malicious; just wrong. Today’s authentication stack has nothing to say about this, because authentication confirmed the credential, not the intent.

The “agent gets tricked” failure. A malicious website, prompt-injection attack, or compromised merchant feed convinces the agent to authorize a transaction the user did not intend. The agent is functionally a confused deputy — it has the user’s authority, and it has been manipulated into using that authority against the user’s interest. The credential was valid. The authentication was valid. The transaction was disastrous.

The “agent is impersonated” failure. A bad actor either steals an agent’s credentials or stands up their own agent that behaves indistinguishably from the user’s legitimate one. From the merchant’s and issuer’s perspective, the transaction looks normal. The chargeback queue grows.

The existing payment-authentication infrastructure cannot reliably distinguish among these failure modes after the fact, and cannot reliably prevent them in advance. That is the gap FIDO is moving to close.

What the Standards Will Likely Include

FIDO has not published the technical specifications yet, and the work will probably take 12 to 24 months to produce something stable. But the architectural shape is reasonably predictable from the problem definition and from the early work already visible from Mastercard’s “Agent Pay” initiative, Visa’s parallel agentic-commerce program, and the broader authentication-research community.

Expect the standards to address at least four primitives.

Agent identity. A cryptographically verifiable identifier for the agent itself, distinct from the user’s credential. This is the equivalent of separating “who is the principal” (the user) from “who is the actor” (the agent acting on the user’s behalf). Today, payment systems collapse those two roles. Tomorrow, they will need to keep them distinct so that an agent compromise does not equal a user compromise.
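
As a sketch of what that separation could look like, the snippet below uses the Web Crypto API to give the user and the agent distinct keypairs, with the user's key signing a statement that binds the agent's public key. Every field name here is an invented illustration; FIDO has published no format.

```ts
// Sketch: principal and actor as distinct keys. Assumes a runtime with
// the Web Crypto API (modern browsers, Node 18+). Field names invented.
const algo = { name: "ECDSA", namedCurve: "P-256" };
const userKeys  = await crypto.subtle.generateKey(algo, false, ["sign", "verify"]);
const agentKeys = await crypto.subtle.generateKey(algo, false, ["sign", "verify"]);

// The user's key signs a binding of the agent's public key, so the agent
// can later prove "I act for alice" without ever holding alice's own
// credential. Revoking the agent key leaves alice's credential intact.
const agentPub = new Uint8Array(await crypto.subtle.exportKey("raw", agentKeys.publicKey));
const attestation = new TextEncoder().encode(JSON.stringify({
  principal: "user:alice",                            // who authorizes
  actorKey: btoa(String.fromCharCode(...agentPub)),   // who acts
  issuedAt: new Date().toISOString(),
}));
const delegationSig = await crypto.subtle.sign(
  { name: "ECDSA", hash: "SHA-256" }, userKeys.privateKey, attestation,
);
```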

Scoped delegation. A mechanism for the user to authorize an agent to act on their behalf within explicit limits — merchants, dollar amounts, categories, time windows, geographic constraints, anything else the system deems contractually relevant. This is the OAuth-style scope concept applied to payments. It is also the place where most of the policy work is going to happen, because “scope” in commerce is significantly messier than scope in API access.
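
By analogy, a payments scope might be expressed as a signed mandate object. The shape below is purely illustrative (every field is an assumption, not anything FIDO has specified), but it shows how much policy hides inside "scope" once money is involved:

```ts
// Hypothetical delegation scope for payments. Amounts in minor units.
interface PaymentMandate {
  principal: string;             // the delegating user
  agentId: string;               // the agent identity from the previous sketch
  maxAmountPerTxn: number;       // per-transaction cap, e.g. cents
  maxTotalAmount: number;        // cumulative cap over the mandate's life
  allowedMerchants?: string[];   // allowlist; omitted = any merchant
  allowedCategories?: string[];  // e.g. merchant category codes
  notBefore: string;             // ISO 8601 validity window
  expiresAt: string;
}

const mandate: PaymentMandate = {
  principal: "user:alice",
  agentId: "agent:alice-shopper-01",
  maxAmountPerTxn: 10_000,       // $100.00
  maxTotalAmount: 25_000,        // $250.00 across the window
  allowedCategories: ["home_appliances"],
  notBefore: "2026-02-01T00:00:00Z",
  expiresAt: "2026-02-08T00:00:00Z",
};
```

Even this toy version surfaces the policy problems: is a marketplace one merchant or thousands, and who maintains the category taxonomy the caps hang off of?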

Intent verification. A way to confirm, at transaction time, that the action the agent is taking matches the action the user authorized. This is the genuinely hard problem. Some early proposals involve user-side review of high-value transactions, asynchronous confirmation flows, or AI-generated summaries that the user signs off on. Each has tradeoffs between friction, security, and the entire reason the user delegated to an agent in the first place.
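
The mechanical part of that check is easy to sketch; the hard part is everything the sketch leaves out. Using the hypothetical PaymentMandate shape from above:

```ts
interface ProposedTransaction {
  merchant: string;
  category: string;
  amount: number;      // minor units
  timestamp: string;   // ISO 8601
}

// Transaction-time scope check against the mandate sketched above.
// spentSoFar would come from the issuer's or wallet's running tally.
function withinScope(tx: ProposedTransaction, m: PaymentMandate, spentSoFar: number): boolean {
  const t = Date.parse(tx.timestamp);
  if (t < Date.parse(m.notBefore) || t >= Date.parse(m.expiresAt)) return false;
  if (tx.amount > m.maxAmountPerTxn) return false;
  if (spentSoFar + tx.amount > m.maxTotalAmount) return false;
  if (m.allowedMerchants && !m.allowedMerchants.includes(tx.merchant)) return false;
  if (m.allowedCategories && !m.allowedCategories.includes(tx.category)) return false;
  return true; // in scope, which is still not the same as intended
}
```

Note what the function cannot tell you: an in-budget, in-category purchase of the wrong product passes every check. Closing that gap without reintroducing a confirmation prompt on every transaction is the open problem.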

Audit and dispute infrastructure. A standardized record of which agent took which action, under what scope, on whose behalf, against what merchant, for what amount, with what intent confirmation. This is what makes chargebacks, regulatory investigations, and consumer complaints tractable in an agentic world. Without it, the dispute process collapses into a he-said-she-said between users, agent providers, merchants, and issuers.
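
Concretely, the record would need to answer each of those questions as a field. One hypothetical shape (all names invented for illustration):

```ts
// Hypothetical standardized audit record for one agent transaction.
interface AgentTransactionRecord {
  txnId: string;
  agentId: string;                 // which agent acted
  principal: string;               // on whose behalf
  mandateId: string;               // under what delegated scope
  merchant: string;                // against which merchant
  amount: number;                  // minor units
  currency: string;                // ISO 4217, e.g. "USD"
  intentConfirmation: "pre_approved" | "user_confirmed" | "auto_within_scope";
  agentSignature: string;          // signature over the canonical record, base64
  timestamp: string;               // ISO 8601
}
```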

The FIDO/Google/Mastercard work will likely focus on the cryptographic and authentication layers (agent identity, scoped delegation, intent verification primitives). The audit infrastructure, the merchant-side acceptance rules, and the issuer-side risk policies will land downstream. EMVCo, the major card networks, the regulators, and the major payment processors will each have a role to play.

The Privacy and Compliance Dimensions Most Coverage Misses

The payments-industry framing of this story is mostly about fraud and authorization. The privacy and compliance dimensions are less covered and equally important.

Where does the data flow when an agent transacts? Every transaction performed by an agent involves at least three new data flows compared to a direct human transaction: the prompt or instruction the user gave the agent, the agent’s reasoning trace and tool-call history, and the agent provider’s logs of the entire interaction. Each of those is personal information of the user. Each of them is now sitting in the agent provider’s infrastructure (Anthropic, OpenAI, Google, Microsoft, or wherever) on top of whatever the merchant and issuer normally retain. The privacy notice the user agreed to with their bank does not contemplate any of this. The agent provider’s privacy notice nominally does, but is unlikely to be specific about payments. Whose obligations apply, and who is the controller for what, are genuinely unsettled questions.

What is the lawful basis under GDPR and similar regimes? Where the user is in the EU/UK, an agent transacting on their behalf is processing their personal data — and the personal data of every counterparty involved. Article 6 lawful basis is straightforward for the merchant transaction itself; it is much messier for the side flows (training data, model improvement, debugging logs). Article 22 — the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects — is going to matter a lot here, because many agent transactions are precisely that: automated decisions with legal or financial effect.

Who is the controller, and who is the processor? When a user instructs an agent to buy something, the agent provider is processing personal data on the user’s behalf. That is at first glance a processor relationship. But the agent provider is also exercising independent judgment about which merchants to use, what data to collect, and how to handle errors — which looks more like a controller under the GDPR, and more like a “business” than a “service provider” under the CCPA’s analogous framework. The case for joint controllership in some scenarios is real. The contractual architecture for that has not yet been written for the agent provider–user–merchant triangle.

What about the merchant’s obligations? A merchant that accepts agent-driven traffic is, at a minimum, processing personal data of the user via the agent. They may also be processing data about the agent itself, the agent provider, and the chain of custody. PCI DSS, the GDPR, the CCPA, and PSD2 all impose specific merchant duties. None of them are written with agent-mediated transactions in mind. Expect contractual chaos for the next two to three years.

The training data question. When an agent transacts on a user’s behalf, the resulting interaction trace — successful and failed alike — is exactly the kind of data agent providers will want to use to improve their systems. Whether the user has meaningfully consented to that, whether the resulting use crosses the line into Article 22 automated decision-making territory, and whether the user’s payment information ends up in any training corpus are all live questions. The Talkspace case earlier this week is a useful reminder of how badly these questions can age when the data later becomes interesting to lawyers.

Liability and Dispute Resolution: The Real Battleground

Underneath the technical-standards work, the question that will actually determine how agentic commerce evolves is liability allocation. When a user’s AI agent makes a $5,000 purchase the user did not want, who eats the loss?

The candidates are:

  • The user (“you authorized the agent, you are responsible”) — the simplest legal answer, and the one issuers will reach for. Likely unsustainable politically as agentic commerce hits scale.
  • The agent provider (“you built and operate the agent that misbehaved”) — the answer most aligned with strict-liability logic, the one consumer advocates will favor, and the one agent providers will fight.
  • The merchant (“you accepted the transaction without verifying intent”) — the traditional 3D Secure liability shift logic, applied to agents. Workable for merchants who have integrated the new standards, painful for those who haven’t.
  • The issuer (“you have the existing chargeback infrastructure, you absorb it”) — the path of least resistance, but the one most likely to push issuers toward conservative agent-acceptance policies that strangle the ecosystem.

Most likely answer: a hybrid, with the standards work creating a “safe harbor” for parties that adopt the new authentication primitives, and residual liability flowing to whichever party in the chain failed to implement them. That mirrors how 3D Secure liability shifted, how PCI DSS has worked, and how every prior major payment-authentication change has played out. It is also why the standards work matters so much — the standards define the safe harbor.

What In-House Teams Should Do Now

The FIDO standards are 12 to 24 months from operational maturity. The window for compliance teams to think about agentic commerce is now, before the standards are finalized and before the operational practices harden into vendor lock-in.

A short list, in priority order.

Catalog where your business is already exposed to agent-driven traffic. Every consumer-facing site is now receiving some volume of automated traffic that includes AI agents acting on users’ behalf. Most of it is currently invisible. Start by asking your fraud, ad-ops, and analytics teams what they’re seeing. Many will have data they did not realize was relevant.

Map your current authentication and acceptance posture. What does your site do when an agent attempts a purchase? Most merchants today either silently accept it (if it looks normal), block it via bot-management tools (often catching legitimate agents along with bots), or fail it with cryptic errors. None of those are good positions for the next 24 months.

Get your privacy and contracts teams talking to your payments team. The agentic commerce problem sits at the intersection of three functions that often do not coordinate. The contractual, controller/processor, and lawful-basis questions cannot be answered by any one of them alone.

Think through your liability posture. Who eats the loss in the failure modes above, under your current contracts and your current insurance? If the answer is “we have not modeled this,” that is the answer to start with.

Watch the standards process. FIDO, EMVCo, the major card networks, the EU institutions, and the major issuer associations are all going to publish position papers, draft standards, and consultation documents over the next 12 to 24 months. The compliance teams that participate, even as readers and commenters, will be 12 months ahead of the ones that wait for final rules.

Update DPIAs and AI governance documentation. If your business is deploying any agentic capability — for customer service, sales, marketing, internal operations — the data-flow analysis for that capability should reflect the questions above, even where formal regulatory answers do not yet exist. Documenting the analysis now creates the artifact you will be glad to have when the regulatory questions land.

The Bottom Line

The FIDO/Google/Mastercard announcement is not a small payments-industry update. It is the first visible piece of architectural work on what will become a load-bearing layer of consumer commerce — the rules for AI agents transacting on humans’ behalf. The technical work will take time. The privacy, compliance, and liability questions that surround the technical work will take longer. The businesses that begin engaging with both now will define the operational practices the rest of the market eventually adopts.

The temptation, for in-house teams, is to treat agentic commerce as a problem for 2027 or 2028. The temptation is wrong. The standards being written now will shape the contracts, the audit obligations, the regulatory posture, and the liability allocation that bind those teams for the decade after that. The right time to read, comment, and prepare is the time before the standards are final.

That time is now.
