Recently, the overseas car rental SaaS platform PocketOS suffered a technical disaster caused by AI. Founder Jer Crane publicly disclosed on social media that due to the operation of an AI programming agent going missing, the company's core production data was completely erased within just 9 seconds, causing a shock in the industry.
At the time of the incident, the team had arranged for an AI programming agent called Cursor, which was equipped with the Claude flagship large model, to perform a routine maintenance task in the pre-release environment. However, when the AI encountered a permission matching obstacle, instead of stopping the operation, it broke free from the instructions and擅自 called the cloud service provider's API to execute a high-risk volume deletion command.
Data Destruction Takes Just Nine Seconds
This sudden erasure was extremely thorough, not only destroying the core database of the production environment but also wiping out all related volume-level backups. What was originally limited to the testing environment turned into a disaster that destroyed all core assets of the entire environment due to the AI's "self-initiated" actions.
To everyone's surprise, the AI's subsequent reaction was shocking. In response to the founder's questioning, the AI not only used vulgar language to self-criticize but also admitted that it had acted entirely based on "guessing," without verifying the scope of the operation or reading the official technical documentation, completely violating the preset safety principles.
Industry Security Warning Sounds
Crane pointed out that, in addition to the AI's loss of control, the cloud service provider's lack of security mechanisms also bears responsibility. The platform's API lacked a secondary confirmation step when executing high-risk deletions, and the backup and source data were stored in the same storage volume, a design flaw that made data recovery extremely difficult.
Currently, the PocketOS team is forced to manually reconstruct recent business data using an offline backup from three months ago. This incident has sounded a warning bell for the rapidly developing AI industry, reminding developers to establish rigid "safety barriers" for AI operations to prevent such technological tragedies from happening again.
