It started as a weekend project. Sammy Azdoufal, an AI strategy lead at a vacation rental company based in Spain, had recently purchased a DJI Romo robot vacuum. He wanted to do something that any self-respecting tech enthusiast would find irresistible: control it with a PlayStation 5 DualSense controller. Not because it was necessary. Not because it made the vacuum clean better. Simply because it would be, as he put it, cool.
What happened next was not cool. Within nine minutes of sitting down with a Verge reporter to demonstrate his creation, approximately 6,700 vacuums across 24 countries reportedly responded to him as their operator. He had not hacked DJI's servers. He had not bypassed any authentication system. He had not used stolen credentials. All he did was extract the private authentication token from his own Romo vacuum. The token that was supposed to authenticate only his device, on his network, to his account, turned out to be a master key to nearly seven thousand strangers' homes.
Azdoufal claimed he could view live video feeds, listen through onboard microphones, monitor cleaning activity and produce detailed floor plans of private residences. Device IP addresses reportedly revealed approximate locations. In nine minutes, using a tool he built himself to play with his own appliance, an ordinary person had accidentally assembled the most intimate surveillance capability imaginable — live audio and video from inside people’s homes, the exact layout of their living spaces, and their approximate geographic location — across two dozen countries simultaneously.
He did not exploit it. He immediately reported it. But the question that should concern every privacy professional, every IoT product manager, and every consumer who has ever brought a camera-equipped smart device into their home is not what Azdoufal did with the access. It is how the access was possible in the first place — and what it reveals about the security architecture, regulatory framework, and corporate accountability standards governing an entire category of devices that millions of people have invited into the most private spaces of their lives.
The Technical Failure: Broken Object Level Authorization at Scale
Azdoufal used Claude Code to reverse engineer the protocol used by the DJI Romo to communicate with its servers. This is the first detail that cybersecurity professionals will find significant, and not because AI coding tools are inherently dangerous. The significance is what the reverse engineering revealed: a backend that was not performing proper permission validation between user tokens and device assignments.
The vulnerability class here is what security researchers call Broken Object Level Authorization — BOLA, or sometimes IDOR, Insecure Direct Object Reference. It is consistently ranked as one of the most common and most critical API security failures. The principle is straightforward: a system that authenticates a user correctly but then fails to verify that the authenticated user is actually authorized to access a specific resource has a BOLA vulnerability. In DJI’s case, the system authenticated Azdoufal’s token — confirming he was a legitimate DJI Romo user — and then, catastrophically, returned not just his device but every device on the platform.
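The failure mode is easy to see in miniature. The sketch below is purely illustrative, not DJI's actual backend: the data structures, function names, and token format are all hypothetical. It shows a handler that authenticates a token but never checks device ownership, next to the object-level check that would have closed the hole.

```python
# Minimal sketch of a BOLA vulnerability. All names and data here are
# hypothetical; this does not represent DJI's real API or token scheme.

DEVICES = {
    "dev-001": {"owner": "alice", "feed": "camera-001"},
    "dev-002": {"owner": "bob",   "feed": "camera-002"},
}
TOKENS = {"tok-alice": "alice", "tok-bob": "bob"}

def get_device_vulnerable(token: str, device_id: str) -> dict:
    """Authenticates the caller but never verifies device ownership (BOLA)."""
    if token not in TOKENS:          # authentication: who are you?
        raise PermissionError("invalid token")
    return DEVICES[device_id]        # missing check: is this YOUR device?

def get_device_fixed(token: str, device_id: str) -> dict:
    """Authenticates AND authorizes: the token must map to the device's owner."""
    user = TOKENS.get(token)
    if user is None:
        raise PermissionError("invalid token")
    device = DEVICES.get(device_id)
    if device is None or device["owner"] != user:
        raise PermissionError("not authorized for this device")
    return device

# Alice's perfectly valid token retrieves Bob's device via the vulnerable path:
assert get_device_vulnerable("tok-alice", "dev-002")["owner"] == "bob"
```

Note that the vulnerable path involves no forged credentials and no broken cryptography; authentication succeeds exactly as designed. The only missing step is the per-object ownership comparison, which is why BOLA flaws survive penetration tests that focus on login and token validity.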
DJI told Popular Science it had identified the vulnerability through an internal review in late January and began remediation immediately. “The issue was addressed through two updates, with an initial patch deployed on Feb. 8 and a follow-up update completed on Feb. 10,” the company said, adding that the fix was deployed automatically and required no user action.
That timeline is itself a compliance problem. DJI claims to have identified the vulnerability internally in late January. Azdoufal’s demonstration with The Verge took place on February 10. The gap between internal discovery and complete remediation — during which the vulnerability remained exploitable — is a disclosure and incident response question that data protection authorities in every jurisdiction where DJI Romo devices operate will be entitled to examine.
A company spokesperson told The Verge the flaw had been fixed — a statement that arrived about 30 minutes before Azdoufal demonstrated that thousands of robots, including the journalist’s own review unit, were still reporting in live. DJI later issued a fuller statement acknowledging a backend permission validation issue and two patches, deployed on February 8 and 10.
A company that tells a journalist its security flaw has been fixed thirty minutes before the journalist’s own review unit demonstrates the flaw is still active is not a company with a mature incident response program. It is a company that issued a press statement before completing a remediation it claimed was complete. That gap between what DJI said and what was demonstrably true during a live journalist interview is the kind of detail that regulatory investigations are built around.
Azdoufal has indicated that additional security concerns remain unaddressed. Among these outstanding issues is the ability to stream video feeds from DJI Romo devices without requiring a security PIN. Another problem of significant severity has been identified but not publicly disclosed. The primary vulnerability may be patched. The security architecture that produced it apparently has not been redesigned.
What Was Actually Exposed: The Privacy Inventory
For privacy professionals conducting risk assessments on smart home devices, the data categories exposed by this vulnerability deserve explicit enumeration, because the cumulative profile they create is more alarming than any individual element in isolation.
The flaw meant Azdoufal could view live camera feeds, activate microphones, check battery levels, generate 2D floor plans of homes and determine approximate locations through IP addresses. Running through each of these:
Live camera feeds from inside private residences are special category data in every meaningful sense — they reveal the physical appearance of the home’s occupants, their daily routines, who visits them, whether they are home or away, and potentially far more sensitive content depending on where the device was operating when accessed. A robot vacuum that cleans a bedroom, bathroom, or home office is a camera that has visited every room in the house.
Active microphone access from inside a private home is, under multiple legal frameworks, the functional equivalent of a wiretap. The California Invasion of Privacy Act (CIPA), the Electronic Communications Privacy Act at the federal level, and equivalent frameworks across the EU treat the unauthorized interception of private communications as a serious legal violation. A vulnerability that enables live microphone access to 6,700 homes is not a security incident in a technical sense only — it is a potential mass wiretapping event, regardless of whether anyone actually listened.
Detailed floor plans of private residences are data that most people have never shared with any company and would never knowingly share. Floor plans reveal the layout of security vulnerabilities — entry points, blind spots, room configurations — in ways that have obvious implications for physical security. They also reveal intimate details of how people live: whether a home has a nursery, a medical equipment room, or adaptive accessibility features. This is precisely the category of inference data that privacy law is struggling to keep pace with.
Approximate location data via IP addresses, combined with floor plan data, creates a profile that is meaningful for physical security purposes in ways that abstract location data alone is not. Knowing that a specific floor plan belongs to a specific IP address in a specific neighbourhood narrows the re-identification problem to a tractable scope.
The access extended beyond vacuums. Even DJI Power portable battery stations were showing up and reporting diagnostics. The surface area of the exposure was not limited to the advertised product category — it encompassed the broader ecosystem of DJI-connected devices in each household, which in some cases extended to power management infrastructure.
The DJI Problem: National Security Dimensions
The DJI Romo incident does not exist in isolation. It lands at a moment of acute regulatory scrutiny of DJI’s data practices that predates and contextualises the vacuum vulnerability.
The FCC placed the Chinese drone maker on its Covered List in December 2025 after a classified interagency review determined foreign-made drones posed unacceptable dangers. Republican Florida Sen. Rick Scott has sought to retroactively revoke all DJI FCC authorizations granted after Dec. 23, 2024. DJI filed a lawsuit challenging the FCC decision on February 20 — the same week the vacuum vulnerability was being publicly reported.
The Covered List designation reflects a specific concern: that devices manufactured by Chinese companies and connected to cloud infrastructure may create data access pathways for Chinese government entities, regardless of where the data is physically stored. Security researcher Kevin Finisterre told The Verge that housing data on American servers offers no protection against access by DJI’s Chinese workforce. This is not a hypothetical concern — it is the operational logic behind the FCC’s classification decision, and it applies to the Romo vacuum’s cloud infrastructure with the same force it applies to DJI’s drones.
The floor plans, live video, and microphone access that Azdoufal could access from his PS5 controller were accessible because they existed in DJI’s cloud infrastructure. They were not stored locally on the device; they were transmitted to and retrievable from DJI’s servers. That architecture — cloud-dependent, always-on, accessible remotely — is the same architecture that the FCC’s Covered List designation is designed to scrutinise.
South Korea’s Consumer Agency tested six robot vacuums in 2025 and found critical weaknesses in three Chinese models. The agency found that Dreame’s X50 Ultra allowed hackers to remotely activate its camera while Narwal and Ecovacs units lacked proper authentication, exposing photos captured during cleaning sessions to outside parties. Samsung and LG devices earned higher marks. The DJI Romo vulnerability is not an isolated incident affecting a single product from a single manufacturer. It is the most dramatic recent data point in a pattern of systemic security inadequacy in Chinese-manufactured smart home devices that regulatory agencies in multiple countries have independently documented.
The AI Coding Tools Dimension: A New Threat Surface
There is a second privacy and security story embedded in the Romo incident that has received less attention than the vacuum vulnerability itself but may ultimately be more consequential for the long-term IoT security landscape.
AI-powered coding tools, which make it easier for people with limited technical knowledge to probe and exploit software flaws, risk amplifying these security concerns even further. Azdoufal is an AI strategist, not a security researcher. He did not approach the DJI Romo with malicious intent or sophisticated hacking tools. He used Claude Code to reverse engineer the device's communication protocol — a task that, five years ago, would have required specialised expertise in network analysis, API reverse engineering, and protocol decoding. Today, it is a weekend project.
The democratisation of security research capability that AI coding tools represent is not inherently problematic — responsible disclosure, as Azdoufal practised, is the reason this vulnerability is now patched rather than silently exploited. But the same capability that enabled Azdoufal to find and report this flaw would enable a malicious actor to find and exploit the next one. Cybersecurity experts say the incident is a warning sign for the entire smart-home industry. AI coding tools are lowering the bar for advanced security probing, significantly enlarging the population of people capable of testing IoT protocols — further eroding any faith in security through obscurity.
Security through obscurity — the implicit assumption that most consumers lack the technical capability to probe device security, so hiding the protocol is equivalent to securing it — has always been a fragile foundation. The AI coding tool ecosystem has now made it functionally worthless. Any device whose security depends on the difficulty of reverse engineering its communication protocol should be considered insecure by design in 2026.
The Broader Pattern: Robot Vacuums as Surveillance Infrastructure
The DJI Romo incident is not the first time a robot vacuum has been turned into an unwitting surveillance tool, and the pattern of incidents is important context for understanding why this vulnerability class keeps reappearing.
In 2024, hackers commandeered Ecovacs Deebot X2 vacuums across U.S. cities, using them to shout racial slurs at their owners and chase their pets — a dramatic demonstration of what remote control of a home robot actually looks like in practice. The Ecovacs incident was not a subtle data exfiltration. It was overt, aggressive, and deeply disturbing for the households affected. It received significant press coverage, generated regulatory attention, and prompted Ecovacs to issue security patches. It did not prevent the South Korean consumer agency from finding critical weaknesses in Chinese vacuum models the following year.
The iLife A11 incident that preceded both — where an engineer discovered his robot vacuum was consistently sending logs and telemetry data back to the manufacturer, and when he blocked that transmission, the company remotely bricked the device — illustrates the other dimension of the surveillance problem. The DJI Romo vulnerability exposed data to a third party through a security flaw. The iLife incident revealed data collection that was not a flaw at all — it was the intended operation of the device, and the company treated user interference with that data collection as a terms of service violation warranting device destruction.
These incidents collectively describe an industry where cloud-dependent architecture is the default, security validation is inadequate, data collection is extensive and poorly disclosed, and corporate responses to security failures prioritise reputation management over transparency. That pattern is not a DJI problem or a Chinese manufacturer problem specifically — it is an IoT industry problem that happens to be most dramatically illustrated by the products most thoroughly documented to have it.
The Regulatory Gap: What Law Currently Says About Your Vacuum’s Camera
For privacy professionals, the most practically urgent question raised by the Romo incident is what legal obligations DJI had — and what obligations it may have violated — under applicable data protection frameworks.
Under the GDPR, which governs DJI’s European operations, the Romo’s collection and processing of audio and video data from inside private residences is the processing of personal data, and potentially special category data depending on what the camera captures. The backend permission validation failure that allowed 6,700 users’ data to be accessible to an unauthorised party is a personal data breach under Article 4(12) — “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed.” Article 33 requires notification to the relevant supervisory authority within 72 hours of becoming aware of the breach where it is likely to result in a risk to individuals’ rights and freedoms.
DJI claims to have identified the vulnerability in late January. The patching timeline spans February 8 to 10. The supervisory authority notification question — whether DJI notified the relevant EU data protection authorities within 72 hours of becoming aware of a breach affecting an unknown number of European residents’ home video and audio data — is one that the EDPB, or individual national DPAs, are entitled to pursue.
Under California law, the CCPA’s definition of personal information explicitly includes geolocation data and audio and video recordings. The collection of live video and audio from inside California residents’ homes, combined with floor plan data that constitutes a unique identifier of each residence, creates the full suite of CCPA obligations including privacy notice, security requirements, and breach notification. The California Attorney General’s office, which has been active in IoT privacy enforcement, has both the authority and the factual basis to examine DJI’s compliance.
The FTC’s Section 5 authority over unfair or deceptive practices applies to IoT security failures where the gap between a company’s security representations and its actual security practices constitutes deception. DJI markets the Romo as a premium home device with cloud connectivity. Marketing premium home devices with cameras and microphones to consumers without disclosing that a stranger with a PlayStation controller could access those feeds across 24 countries is a representation gap with Section 5 implications, regardless of whether DJI made explicit security promises.
What This Means for Every Smart Home Device in Your Life
The DJI Romo vulnerability is the most vivid recent illustration of a privacy risk that exists, in varying degrees of severity, in every internet-connected device with a camera, microphone, or location sensor that you have brought into your home. The risk is not hypothetical. It is not the kind of privacy concern that requires an adversarial nation-state actor or a sophisticated criminal organisation to materialise. It requires one person with a weekend project, an AI coding tool, and a PlayStation controller.
The practical guidance for consumers is uncomfortable but honest: every cloud-dependent smart home device with a camera or microphone is an endpoint in someone else’s infrastructure. The security of that endpoint is determined by the manufacturer’s backend architecture, not by anything you control. A DJI Romo owner who enabled two-factor authentication, used a strong password, kept their app updated, and read the privacy policy was no more protected against this vulnerability than someone who did none of those things, because the vulnerability was in DJI’s backend, not in the user’s behaviour.
For organisations advising on IoT security and privacy compliance, the Romo incident confirms that the security-through-obscurity era for smart home devices is definitively over. AI coding tools have permanently lowered the barrier to protocol reverse engineering. Any device whose security depends on the complexity of its API being a practical obstacle to unauthorised access should be considered insecure today, not eventually. Privacy impact assessments for IoT products need to explicitly model the threat scenario where a motivated but technically ordinary consumer, using publicly available AI tools, extracts authentication tokens and probes the backend for authorisation failures.
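One concrete way to fold that threat scenario into an assessment is an automated authorization sweep: for every valid token, attempt to access every known device and flag any cross-tenant grant. The harness below is a hedged sketch under assumed names — the ownership table and the `backend_get_device` stand-in are illustrative, not any vendor's real API — but the structure of the check is generic.

```python
# Hypothetical sketch of the threat-model check a privacy impact assessment
# should include: any valid token requesting a device its user does not own
# must be refused. The API shape and ownership data are illustrative only.
from itertools import product

OWNERSHIP = {                     # token -> set of device IDs it may access
    "tok-alice": {"dev-001"},
    "tok-bob": {"dev-002"},
}

def backend_get_device(token: str, device_id: str) -> bool:
    """Stand-in for the real API call; returns True if access is granted.
    A correct backend grants access only to devices bound to the token."""
    return device_id in OWNERSHIP.get(token, set())

def audit_object_level_authorization() -> list:
    """Return every (token, device) pair where a cross-tenant request succeeds."""
    all_devices = set().union(*OWNERSHIP.values())
    violations = []
    for token, device_id in product(OWNERSHIP, sorted(all_devices)):
        allowed = device_id in OWNERSHIP[token]
        granted = backend_get_device(token, device_id)
        if granted and not allowed:
            violations.append((token, device_id))
    return violations

# An empty list means no BOLA violations were observed in this sweep.
print(audit_object_level_authorization())
```

Run against a staging backend instead of the stand-in function, a non-empty result from this sweep is precisely the class of finding that, in the Romo case, went undetected until a consumer with a PlayStation controller stumbled onto it.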
The man with the PlayStation controller wanted to drive his vacuum around his flat. He accidentally demonstrated that 6,700 families had installed, in their most private spaces, a surveillance device they did not know they owned. The vacuums are patched now. The architecture that made them surveillance devices is still there, in every cloud-connected robot in every home that bought one — waiting for the next person who just wants to try something cool on a Saturday afternoon.