As artificial intelligence moves from experimentation to day-to-day operations, a familiar pattern has emerged inside organizations: the technology evolves quickly, while the stories people tell themselves about risk evolve slowly. In a recent piece for the AEPD Laboratory (the innovation and analysis space of Spain’s data protection authority, the Agencia Española de Protección de Datos), Davara & Davara Partner Laura Davara put a clear spotlight on three misconceptions that repeatedly show up in boardrooms, product teams, and procurement checklists. Her message is straightforward: believing a myth does not exempt anyone from data protection obligations.
These misconceptions are not always malicious. Often they are the product of optimism, a desire to move fast, or the mistaken assumption that “AI” is a category so new it sits outside established privacy rules. But AI systems do not operate in a regulatory vacuum. If personal data is involved, the familiar principles still apply: purpose limitation, data minimization, transparency, accuracy, security, and accountability. The only difference is that AI can amplify both the benefits and the consequences of getting those principles wrong.
Below are the three myths Davara calls out, along with the practical reality behind each one and what organizations should do if they want privacy and AI to coexist in a way that is sustainable, defensible, and trustworthy.
The myths and what’s actually true
- Myth: “If we delete chatbot conversations, we’ve solved the privacy problem.”
This is one of the most common comfort blankets in AI deployments: the idea that deleting prompts and outputs is the privacy equivalent of “resetting” the system. In practice, deletion is rarely that clean. AI tools may log conversations for security, quality, debugging, or service improvement. Copies may persist in backups, telemetry systems, vendor support workflows, or monitoring tools. Even when a vendor promises deletion, the organization must still understand what was collected, why it was collected, where it flowed, how long it persists, and what rights users have over it.
More importantly, the privacy risk is not limited to the transcript itself. A single chat exchange can contain sensitive personal data, confidential business information, or identifiers that make the individual easily traceable. Deleting a record after the fact does not undo an unlawful collection, a lack of transparency, or a purpose that was never properly defined. Deletion is a control, not a legal basis.
Reality check: If your organization permits AI chat tools in customer support, HR, legal, engineering, or sales, you need governance that addresses the entire lifecycle of the interaction: what users can input, what the system can output, what is stored, what is shared, and what the vendor does with that content.
- Myth: “Data quality is the vendor’s problem, not ours.”
AI discussions often treat data quality like a technical footnote, something a model provider will “handle” through training and tuning. Davara’s point lands because it ties quality directly to privacy obligations and organizational accountability. Poor data quality is not merely a performance issue; it can be a compliance issue. Inaccurate, incomplete, outdated, or biased data can produce incorrect inferences, unfair outcomes, and decisions that are difficult to explain or challenge. That is a privacy and governance problem as much as it is an AI problem.
The deeper issue is design choice. AI systems do not simply “happen” to use personal data. Someone decides what to collect, what to label, what to exclude, what features to engineer, how the system will be evaluated, and what will be optimized. Those choices determine whether the system respects minimization, supports accuracy, and maintains a defensible purpose. If an organization chooses convenience over careful design, it inherits the consequences.
Reality check: You cannot outsource accountability. Even when a vendor supplies the model, the deployer controls the context: the use case, the input channels, the prompts, the permissions, the integrations, and the business process that turns AI output into action.
- Myth: “AI is so autonomous that it determines outcomes, not us.”
There is a persistent narrative that AI “takes over,” that it acts like an independent decision-maker with its own momentum. Davara’s framing is more grounded: the influence of AI on people’s lives is directly tied to how much an organization chooses to use it. If a model merely drafts internal summaries, the risk profile is very different from that of a system that scores individuals, flags suspicious behavior, prioritizes job candidates, determines eligibility, or nudges users toward certain actions.
Organizations decide the level of automation, the degree of reliance on outputs, and whether there is meaningful human oversight. They decide whether the AI recommendation is advisory or decisive. They decide whether users can contest outcomes and whether there are guardrails to prevent the system from drifting into uses it was never designed to support.
Reality check: “The model did it” is not a serious governance posture. Deployers choose how consequential the system becomes, and regulators will look to those choices when evaluating accountability.
What these myths reveal about AI governance
When you line these misconceptions up, a pattern appears: each one tries to relocate responsibility somewhere else. Deletion becomes a magic eraser. Data quality becomes a vendor feature. Autonomy becomes an excuse. But AI deployments are built from human decisions, and those decisions sit squarely within established privacy frameworks.
That does not mean AI is incompatible with privacy. It means privacy needs to be present early, not bolted on after adoption. Strong privacy practices make AI systems more predictable and more trustworthy. They also help organizations avoid the expensive cycle of deploying fast, then scrambling to redesign under pressure once a complaint, incident, or audit arrives.
How to translate “privacy and AI must be united” into daily operations
Most teams do not need more theory; they need workable operating habits. If your organization is deploying chatbots, copilots, classifiers, recommenders, or decision-support tools, the practical goal is to put a small set of repeatable controls in place that scale across use cases.
- Define permitted inputs: Be explicit about what users, employees, and contractors are allowed to paste into AI tools. If sensitive or confidential data should not be used, put that rule in policy and reinforce it in UI prompts and training (see the input-policy sketch after this list).
- Separate experimentation from production: A sandbox can be useful, but production-grade AI requires stronger access controls, logging, monitoring, and vendor terms.
- Map data flows end-to-end: Know what is collected, where it is stored, which systems receive it, and whether it is used for model improvement, analytics, or security monitoring.
- Set retention rules that match the purpose: Retain only what you need, for as long as you need it, and ensure deletion actually propagates through backups and downstream systems where feasible.
- Validate outputs for accuracy and bias: Build lightweight testing into deployment (sampling, error analysis, and review paths) when outputs could materially affect people; a sampling sketch follows this list.
- Keep humans meaningfully in the loop when stakes are high: If the tool influences hiring, access, pricing, eligibility, health, or legal outcomes, design oversight that is real, not ceremonial.
- Make transparency usable: Privacy notices and in-product explanations should reflect the real experience: what data is used, what the system does, and what choices users have.
- Contract for control: Vendor terms should address training on your data, confidentiality, security, breach notification, sub-processors, and support access to logs and evidence when issues arise.
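To make the first two controls concrete, here is a minimal Python sketch of the kind of pre-submission guardrail a deployer might put in front of a chat tool: it blocks prompts that match a few illustrative sensitive-data patterns and tags each accepted interaction with a purpose and a retention period. The pattern list, function names, and retention value are assumptions for illustration only; a real deployment would rely on proper data-classification or DLP tooling and on retention periods agreed with legal and privacy teams.

```python
import re
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative patterns only -- a real deployment would use proper
# data-classification or DLP tooling, not a handful of regexes.
BLOCKED_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "national_id": re.compile(r"\b\d{8}[A-Z]\b"),  # e.g. Spanish DNI format
}

@dataclass
class InteractionRecord:
    """Metadata kept alongside each chat exchange so retention and deletion
    can actually be enforced later (hypothetical structure)."""
    purpose: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    retention_days: int = 30  # placeholder value, not a recommendation

    @property
    def delete_after(self) -> datetime:
        return self.created_at + timedelta(days=self.retention_days)

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block prompts that appear to contain
    data the input policy says must not be pasted into the tool."""
    hits = [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]
    return (len(hits) == 0, hits)

if __name__ == "__main__":
    allowed, reasons = check_prompt("Customer DNI is 12345678Z, please draft a reply")
    if not allowed:
        print(f"Prompt blocked by input policy: {', '.join(reasons)}")
    else:
        record = InteractionRecord(purpose="customer_support_drafting")
        print(f"Prompt accepted; record expires {record.delete_after:%Y-%m-%d}")
```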
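For output validation, a similarly hedged sketch of lightweight sampling: draw a reproducible random sample of logged outputs for human review, then compute a simple error rate from the reviewers’ verdicts. The log format and column names ("input", "output", "verdict") are hypothetical; the point is that even a small, regular review pass surfaces accuracy and bias problems before they compound.

```python
import csv
import random

def sample_for_review(log_path: str, sample_size: int = 50, seed: int = 7) -> list[dict]:
    """Draw a reproducible random sample of logged AI outputs for human review.
    Assumes a CSV log with at least 'input' and 'output' columns (names are illustrative)."""
    with open(log_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    random.seed(seed)
    return random.sample(rows, min(sample_size, len(rows)))

def error_rate(reviewed: list[dict]) -> float:
    """Reviewed rows are expected to carry a human 'verdict' column
    ('correct' / 'incorrect') added during the review pass."""
    if not reviewed:
        return 0.0
    incorrect = sum(1 for row in reviewed if row.get("verdict") == "incorrect")
    return incorrect / len(reviewed)
```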
The practical takeaway
The most valuable part of Davara’s argument is that it refuses to treat AI as exceptional in the wrong way. AI is technically novel, but the obligations are familiar. If personal data is in the loop, privacy requirements do not evaporate because the system is probabilistic, because outputs feel ephemeral, or because a vendor claims to handle the hard parts. What changes is the scale and speed with which mistakes can propagate.
When organizations embrace that reality, they can choose AI designs that are both useful and defensible. They can decide what data is necessary, what automation level is appropriate, and what safeguards are proportionate to the consequences. They can build systems that respect people not only because the law requires it, but because trust is becoming a competitive differentiator in every data-driven market.
And that is why the closing message matters: privacy and AI do not belong in separate silos. They must be united in how products are designed, how operations are run, and how everyday decisions get made—before the next model rollout turns a manageable risk into an avoidable incident.