EDPS TechSonar: Humans as ‘AI Shepherds’ – Steering Autonomy Without Sacrificing Rights

As artificial intelligence edges deeper into our daily rhythms—from personalized tutors whispering study tips to autonomous agents booking our weekends—the European Data Protection Supervisor (EDPS) is sounding a clarion call. In the foreword to its freshly released TechSonar 2025-2026 report (November 2025), EDPS Wojciech Wiewiórowski paints a vivid picture: We’re no longer just users of AI; we’re evolving into “shepherds of AI agents,” guiding herds of increasingly independent systems while safeguarding human agency and fundamental rights. This isn’t sci-fi speculation—it’s a roadmap for the next two years, spotlighting six interconnected trends that could redefine privacy, accountability, and dignity in a world where algorithms whisper in our ears.

For compliance leaders and tech innovators across the EU and beyond, TechSonar isn’t optional reading; it’s a compliance compass. With the EU AI Act looming large and GDPR enforcement ramping up, Wiewiórowski’s insights urge a pivot: from reactive data fixes to proactive oversight of AI’s “flock.” At Captain Compliance, we’ve pored over the foreword to extract actionable intel—here’s our deep dive, blending the EDPS’s vision with practical strategies to keep your operations rights-resilient.

The Foreword’s Core Vision: From Automation to Human-Machine Symbiosis

Wiewiórowski kicks off with a nod to AI’s explosive footprint: “The rapidly growing presence of AI-powered services in our everyday lives, alongside an expanding variety of innovative products and solutions that traverse diverse domains, encompassing online enterprises, educational institutions and the entertainment industry.” But it’s the double-edged sword that grips: the promise of efficiency clashing with the perils it poses to fundamental rights.

At its heart, the foreword flips the script on AI’s origin story. Remember when machines were meant to “automate repetitive tasks to free humans for work that required empathy and connection”? Fast-forward to 2025: Agentic AI and its kin are blurring those lines, fostering “cognitive or emotional engagement” that calls our independence and trust into question. The metaphor? Humans as shepherds—stepping back from the grind to oversee AI actions, ensuring they “align with human values.” It’s poetic, but pointed: Without vigilant guidance, autonomy could erode our agency—the power to choose, control, and own our decisions.

This year’s TechSonar zeros in on AI (with confidential computing as the outlier), underscoring how these tools aren’t silos; they’re a web reshaping “how we work, learn and relate to technology.” The EDPS pledges ongoing monitoring, but the onus? On us—to fuse innovation with “thoughtful governance and collaboration across disciplines.”

Unpacking the Six Trends: AI’s Interconnected Frontier

Wiewiórowski doesn’t just flag risks; he spotlights synergies. The report’s six trends form a tapestry, each thread amplifying the others. Here’s a granular look, with compliance hooks:

1. Agentic AI: The Autonomous Backbone

These are AI systems “that can autonomously make decisions, take actions and achieve goals without constant human intervention.” Think virtual assistants that don’t just suggest—they book, negotiate, and adapt on the fly. Underpinning coding assistants and companions, agentic AI enables “cooperation to achieve a common goal,” like a swarm of bots tackling supply chain hiccups.

Risk radar: Without shepherding, it could cascade errors or biases unchecked. Compliance angle: Embed DPIAs (data protection impact assessments) for every autonomous loop, per EU AI Act high-risk mandates.
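The “DPIA for every autonomous loop” idea can be sketched as a pre-execution guard: an agent action is dispatched only if a completed DPIA record covers the capability it exercises. Everything here (the `DPIARegistry` class, the capability names) is a hypothetical illustration, not AI Act or EDPS terminology:

```python
from dataclasses import dataclass, field

@dataclass
class DPIARegistry:
    """Hypothetical registry of capabilities with a completed DPIA on record."""
    assessed: set = field(default_factory=set)

    def record_assessment(self, capability: str) -> None:
        self.assessed.add(capability)

    def is_covered(self, capability: str) -> bool:
        return capability in self.assessed

def dispatch_agent_action(registry: DPIARegistry, capability: str, action) -> str:
    """Run an autonomous action only when its capability has a DPIA on file."""
    if not registry.is_covered(capability):
        # Human review required before this loop is allowed to run.
        return f"BLOCKED: no DPIA on record for '{capability}'"
    return action()

registry = DPIARegistry()
registry.record_assessment("calendar_booking")

allowed = dispatch_agent_action(registry, "calendar_booking", lambda: "booked")
blocked = dispatch_agent_action(registry, "contract_negotiation", lambda: "signed")
```

The point of the design is that the block sits in the dispatch path, so an unassessed capability fails closed rather than silently running.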

2. AI Companions: Emotional Entanglements

From chatty wellness coaches to virtual confidants, these “interact with and support humans through personalised experiences.” They’re the empathetic face of AI, but Wiewiórowski warns: They blur “automation and human connection,” risking over-reliance that chips at emotional independence.

Deep dive: In entertainment or therapy apps, companions mine sentiment data for “deeper human-AI system interactions.” Tip: Layer explicit consents for emotional profiling—GDPR’s “special category” data demands it.
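A minimal sketch of that consent layering, assuming a scope-based consent model (the `ConsentScope` enum and the toy sentiment heuristic are illustrative, not a real profiling engine): sentiment mining runs only behind an explicit, purpose-specific grant, and otherwise degrades to a non-profiling response.

```python
from enum import Enum

class ConsentScope(Enum):
    BASIC_INTERACTION = "basic_interaction"
    EMOTIONAL_PROFILING = "emotional_profiling"  # special-category territory under GDPR Art. 9

def analyse_sentiment(message: str, granted_scopes: set) -> dict:
    """Mine mood data only when the user explicitly consented to that purpose."""
    if ConsentScope.EMOTIONAL_PROFILING not in granted_scopes:
        # Fall back rather than silently profiling emotional state.
        return {"profiled": False, "reason": "no explicit consent for emotional profiling"}
    mood = "positive" if any(w in message.lower() for w in ("great", "happy", "love")) else "neutral"
    return {"profiled": True, "mood": mood}
```

Usage: `analyse_sentiment("I love this app", {ConsentScope.EMOTIONAL_PROFILING})` profiles; the same call with an empty scope set returns the refusal record, which doubles as an audit trail entry.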

3. Automated Proctoring: Surveillance in the Classroom

Online exams get a watchful eye: Systems that “monitor online exams to detect cheating” via webcams and behavior analytics. Efficiency win for edtech, but at what cost? “Challenges in terms of transparency, fairness and proportionality of data use” loom large, especially for marginalized students flagged by biased models.

Insight: Proctoring’s dual nature mirrors AI’s broader tension—integrity vs. intrusion. Action: Audit for proportionality; anonymize video feeds where feasible, aligning with ePrivacy Directive updates.

4. AI-Driven Personalised Learning: Tailored, But at a Price

Customizing “content and learning experience to each student’s needs,” these tools promise equity but deliver data feasts. Integrated with agentic backends, they adapt in real-time, raising agency flags: Does the algo “dictate” paths, stifling exploration?

Compliance cue: Schools and platforms, treat learner profiles as sensitive—opt-in for adaptations, with audit trails for bias checks under proposed AI guidelines.

5. Coding Assistants: Productivity’s Double Agent

Dev tools that “help developers write and debug code,” turbocharged by agentic smarts. They “transform the way software is conceived and maintained,” but embed flaws if trained on unchecked data.

Pro tip: For software houses, scan outputs for privacy leaks—tools like GitHub Copilot need wrappers to flag PII in code suggestions.
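A wrapper of that kind can be as simple as a regex pass over each suggestion before it reaches the editor. This is a sketch, not a product: the pattern set below is illustrative and would need tuning for real codebases.

```python
import re

# Hypothetical detector patterns: flag obvious PII and secrets in AI-generated code.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]+['\"]"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_suggestion(code: str) -> list:
    """Return the PII/secret categories detected in a code suggestion."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(code)]

suggestion = 'API_KEY = "sk-123abc"  # contact jane.doe@example.com'
findings = scan_suggestion(suggestion)  # flags both the key and the email
```

In practice you would gate acceptance on an empty findings list, or route flagged suggestions to human review.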

6. Confidential Computing: The Trust Anchor

The non-AI outlier: “Protects data while it is being used by performing computations in secure, isolated environments.” As AI hungers for “sensitive personal and contextual data,” this tech is “central to ensuring privacy, accountability and compliance.”

Why it matters: It ties the flock together—enabling secure agentic swarms without exposing the sensitive data they process. Adopt now: Integrate into cloud stacks for GDPR-compliant processing.

Risks on the Horizon: Autonomy vs. Agency

Wiewiórowski doesn’t sugarcoat: AI’s “increase in autonomy” threatens a “reduction of human agency,” where algorithms “dictate or constrain” choices. Interconnections amplify this—agentic proctoring could unfairly boot students, companions erode trust in real bonds, and learning algos pigeonhole potential. Broader hits? Privacy erosion via data silos, accountability black holes in multi-agent teams, and dignity dips from over-surveillance.

The foreword’s gut punch: “What was once seen as a tool for efficiency has become a platform for cognitive or emotional engagement.” Without checks, we’re herding toward a world where AI shepherds us.

EDPS Recommendations: From Dialogue to Discipline

No mandates here—just a blueprint: “Targeted research and technical innovation,” plus “an uncompromising commitment to supporting human values.” Policymakers? Foster “thoughtful governance” via the AI Act’s sandboxes. Organizations? Prioritize transparency in agentic designs and proportionality in proctoring.

Call to action: The EDPS will “continue to monitor these developments… fostering a dialogue that keeps privacy, accountability and human dignity at the heart of digital transformation.” Your move: Join the convo—engage via EDPS consultations or industry forums.

Business Playbook: Shepherding AI in Your Ops

For EU-facing firms—from edtech startups to enterprise coders—this is your 2025-2026 audit list. We’ve distilled it into steps:

  • Trend-Map Your Stack: Inventory tools—agentic? Companion-like? Run a TechSonar gap analysis.
  • Agency Audits: Stress-test for over-autonomy; ensure human vetoes in high-stakes decisions.
  • Confidentiality First: Pilot secure enclaves for data-heavy AI; certify under ISO 27001.
  • Rights-by-Design: Bake DPIAs into dev cycles; granular consents for personalized features.
  • Cross-Disciplinary Huddles: Loop legal, ethics, and tech quarterly—Wiewiórowski’s collaboration in action.
  • Monitor and Adapt: Quarterly reviews; subscribe to EDPS alerts for trend shifts.
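The first step, trend-mapping your stack, can be sketched as a small gap analysis: classify each tool against the six TechSonar trend categories and surface anything unclassified for review. The inventory format and function names here are hypothetical.

```python
# The six TechSonar 2025-2026 trend categories, as internal tags.
TRENDS = {"agentic_ai", "ai_companion", "automated_proctoring",
          "personalised_learning", "coding_assistant", "confidential_computing"}

def gap_analysis(inventory: dict) -> dict:
    """Split a tool inventory into trend-mapped tools and unclassified gaps."""
    mapped = {tool: trend for tool, trend in inventory.items() if trend in TRENDS}
    gaps = sorted(tool for tool, trend in inventory.items() if trend not in TRENDS)
    return {"mapped": mapped, "needs_review": gaps}

inventory = {
    "scheduling_agent": "agentic_ai",
    "wellness_chatbot": "ai_companion",
    "legacy_crm_plugin": "unknown",
}
result = gap_analysis(inventory)
```

Anything landing in `needs_review` is a candidate for the agency audit and DPIA steps above.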

Pro edge: Tools like our AI Rights Toolkit automate DPIAs, slashing compliance time by 50%.

Charting the Flock: A Forward Path

TechSonar 2025-2026 isn’t a warning—it’s an invitation to lead the herd. As Wiewiórowski closes: Balance autonomy with agency, or risk a digital dystopia. For Captain Compliance clients, this means turning EDPS wisdom into workflows that innovate without infringing.

Ready to shepherd your AI? Our experts craft custom TechSonar roadmaps and data governance software solutions—book a free subject rights audit today and ensure your flock flies true.


Online Privacy Compliance Made Easy

Captain Compliance makes it easy to develop, oversee, and expand your privacy program. Book a demo or start a trial now.