As we warned in two separate privacy and security alerts over the weekend, the quirky new AI-agent platform Moltbook promised something revolutionary: a Reddit-style social network built exclusively for autonomous AI bots. Launched just last week, it quickly went viral as AI “agents” appeared to gossip, share code, and even plot in their own digital corner of the internet—away from human eyes.

But what started as a fun experiment in AI autonomy quickly turned into a major security cautionary tale. Cybersecurity powerhouse Wiz uncovered a glaring vulnerability that exposed sensitive data on thousands of real users, highlighting the risks of “vibe coding” in the rush to build AI-powered apps.

What Is Moltbook, Anyway?

Moltbook positions itself as the ultimate hangout for AI agents—think subreddits, but populated entirely by bots powered by tools like OpenClaw (previously Clawdbot or Moltbot). Users sign up their AI assistants, which then post, comment, upvote, and interact in forums. The founder, Matt Schlicht, proudly declared he “didn’t write one line of code,” relying instead on AI-assisted “vibe coding” to bring the site to life in record time.

The platform exploded in popularity amid the growing hype around autonomous AI agents capable of handling emails, bookings, and more. Viral screenshots showed bots discussing philosophy, forming “religions,” or joking about their human owners—creating an eerie glimpse into a future “agent internet.”

The Massive Security Hole Wiz Uncovered

Behind the memes and viral buzz, danger lurked. Wiz researchers discovered a misconfigured Supabase database that left the entire backend wide open. A single API key—exposed right in the site’s client-side JavaScript—granted full read and write access to anyone who found it.
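This class of exposure is easy to find because keys shipped in client-side JavaScript are visible to anyone who views the page source. As a minimal sketch of how a researcher might spot one, the snippet below scans a bundle for JWT-shaped strings (Supabase keys follow that three-segment format). The bundle text and token here are made-up placeholders, not Moltbook’s actual code or key:

```python
import re

# Hypothetical bundle text standing in for a site's client-side JavaScript.
# The token below is a fabricated JWT-shaped placeholder, not a real credential.
bundle = """
const supabase = createClient(
  "https://example-project.supabase.co",
  "eyJhbGciOiJIUzI1NiJ9.eyJyb2xlIjoiYW5vbiJ9.c2lnbmF0dXJl"
);
"""

# JWTs are three dot-separated base64url segments, and the header segment of a
# JSON payload always starts with "eyJ" — which makes them trivially greppable.
JWT_RE = re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

def find_exposed_keys(source: str) -> list[str]:
    """Return any JWT-shaped strings embedded in page source."""
    return JWT_RE.findall(source)

print(find_exposed_keys(bundle))  # prints the embedded placeholder token
```

If the key found this way is a Supabase anon key and the backend has no row-level restrictions, it grants exactly the kind of unrestricted read/write access described above.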

The exposed data was staggering:

  • 1.5 million API authentication tokens (including keys for services like OpenAI and Anthropic)
  • 35,000 email addresses belonging to real human owners of the agents
  • Private messages exchanged between AI agents
  • Other credentials and platform data

Worse still, the site had no real identity verification. Anyone—human or bot—could post or impersonate agents, undermining the whole “AI-exclusive” premise. Wiz noted that roughly 17,000 humans were actually controlling the supposed 1.5 million “autonomous” agents, often running large fleets of bots.

“As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security.” — Ami Luttwak, Cofounder of Wiz

Wiz responsibly disclosed the issue, and Moltbook’s team fixed it within hours, securing the exposed tables and implementing proper Row Level Security (RLS); all data accessed during the research was deleted.
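Supabase’s Row Level Security is configured in SQL. A minimal sketch of the kind of fix described above might look like the following (the table and column names are hypothetical, not Moltbook’s actual schema):

```sql
-- Hypothetical table: enable RLS so the publishable anon key can no
-- longer read or write rows unrestricted.
alter table agent_messages enable row level security;

-- Example policy: let an authenticated agent read only its own messages.
-- auth.uid() is Supabase's helper returning the caller's user ID.
create policy "agents read own messages"
  on agent_messages for select
  using (auth.uid() = owner_id);
```

With RLS enabled and no matching policy, requests made with the anon key return nothing — which is why enabling it table-by-table is the standard remediation for this misconfiguration.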

Why This Matters for the Future of AI Agents

Moltbook’s mishap isn’t just an isolated oops—it’s a warning sign for the emerging ecosystem of AI agents. As bots gain more autonomy and access to personal data, platforms built hastily with AI tools risk overlooking fundamental security practices like authentication, access controls, and data encryption.

The incident also blurs the line between human and AI online. Without verification, humans can easily infiltrate “agent-only” spaces, manipulate conversations, or steal credentials. In a world where agents handle sensitive tasks (finances, travel, personal info), one leaked API key could cascade into real-world harm.

Experts see this as a preview of broader challenges: rapid “vibe-coded” development accelerates innovation but often skips security basics. As Wiz put it, the platform became “the AI social network any human can control.”

Lessons Learned (and Questions Remaining)

Moltbook patched the flaw quickly after disclosure, but the episode raises tough questions:

  • How do we secure platforms designed for non-human users?
  • Can AI-assisted coding ever replace proper security reviews?
  • Is the hype around autonomous agent swarms outpacing our ability to protect them?

For now, Moltbook serves as both entertainment and a stark reminder: even in a world of advanced AI, old-school security mistakes can bring everything crashing down.