The "double-edged sword" effect of generative AI has once again raised global concerns. According to the latest report from the Japanese police, Japan's largest chain internet café, "Kaikatsu Club," suffered a large-scale data breach in January this year. The culprit behind the incident was a 17-year-old high school sophomore. The teenager bypassed ChatGPT's security measures and used AI-assisted coding to create malicious programs, successfully stealing about 7.25 million pieces of member personal information, including names, addresses, phone numbers, and membership numbers.

 AI as a "Hacker Training Ground"? Teen Bypasses Platform Protections to Write Attack Tools

Investigations revealed that the suspect lives in Osaka, in Japan's Kansai region. Although not a professional hacker, he taught himself programming from a young age and had won awards in information security competitions, giving him a solid technical foundation. His attack proceeded as follows:

- Sending forged API requests to the Kaikatsu Club server to bypass its authentication mechanism (a defensive counterexample follows this list);

- Using automated scripts to extract the contents of the member database in bulk;

- Forcing part of the company's systems temporarily offline in the process, causing serious operational disruption.
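The report names the attack techniques but, understandably, not the code. To illustrate the defensive mirror image of the first bullet instead, here is a minimal sketch of server-side request verification that would reject forged API calls. It assumes a hypothetical HMAC-signed request scheme; the secret, function, and parameter names are illustrative and are not Kaikatsu Club's actual API:

```python
import hashlib
import hmac
import time

# Hypothetical shared secret; a real deployment would load this from a secrets manager.
API_SECRET = b"server-side-secret"

def verify_request(member_id: str, timestamp: str, signature: str) -> bool:
    """Reject API calls that were not signed with the server's shared secret."""
    # Refuse stale timestamps to blunt replay of captured traffic (5-minute window).
    try:
        if abs(time.time() - float(timestamp)) > 300:
            return False
    except ValueError:
        return False
    expected = hmac.new(
        API_SECRET, f"{member_id}:{timestamp}".encode(), hashlib.sha256
    ).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature)
```

Because a valid signature requires the server-side secret, an attacker who merely fabricates or replays request parameters fails the check; a forged-request attack of the kind described above presupposes that no such verification was enforced.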

Crucially, the core code of the malicious program was generated by ChatGPT. Although major AI platforms deploy multiple safeguards against generating malware, the teenager induced the AI to produce working attack payloads by "rephrasing" his requests, for example describing the attack code as a "penetration testing script" or a "security research tool."

The Motive Was Surprising: Stealing Credit Card Information to Buy Pokémon Cards

Police revealed that the teenager's initial goal was to obtain the credit card information linked to member accounts in order to buy Pokémon cards online. After his arrest he claimed he had merely been "testing website vulnerabilities," but police pointed out that his actions showed clear criminal intent and followed no responsible vulnerability-disclosure procedure, constituting criminal offenses such as unauthorized manipulation of electronic records.

Experts Warn: AI Is Significantly Lowering the Bar for Cyber Attacks

Cybersecurity experts pointed out that this case highlights the major risks generative AI poses to cybersecurity:

- Democratization of technology = democratization of crime? Hacking skills that once required years of experience can now be replicated in a few hours with AI; 

- The flexibility of natural language makes AI restriction mechanisms easy to bypass: as long as users keep adjusting their prompts, the AI will eventually give in;

- Even if platforms strengthen filtering, the "AI + human fine-tuning" workflow can still produce highly covert malicious code.

"We are entering an era where anyone can be a hacker," said an anonymous security researcher. "Defenders need to shift from 'patching vulnerabilities' to 'predicting AI abuse patterns.'"

AIbase Observation: When AI Becomes a Criminal "Accomplice," the Security Boundary Needs Rebuilding

This case is not an isolated incident. Since 2025, multiple cases involving AI-generated phishing emails, ransomware, and even deepfake fraud tools have been reported worldwide. Regulatory agencies and tech companies face a dilemma:

- If restrictions are too strict, legitimate uses of AI in fields like security research and education will suffer;

- If left unchecked, a new generation of "AI-enabled crimes" may follow.

Experts are calling for platforms to build traceability and audit mechanisms for AI-generated code, for companies to harden API security and monitor for abnormal access behavior (a sketch of one such monitor follows), and for lawmakers to clarify the legal liability of those who induce AI to generate malicious content.
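On the "monitor abnormal behavior" recommendation, a minimal sketch of what bulk-extraction detection could look like: a per-client sliding-window counter that flags anyone pulling member records faster than legitimate use would explain. The window length, threshold, and all names here are hypothetical, not taken from any real product:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60            # hypothetical look-back window
MAX_RECORDS_PER_WINDOW = 100   # hypothetical per-client ceiling on records served

# Per-client log of (timestamp, record_count) pairs.
_access_log: dict[str, deque] = defaultdict(deque)

def allow_query(client_id: str, records_returned: int) -> bool:
    """Flag clients that pull member records far faster than any normal user."""
    now = time.time()
    log = _access_log[client_id]
    log.append((now, records_returned))
    # Evict entries that have fallen out of the sliding window.
    while log and log[0][0] < now - WINDOW_SECONDS:
        log.popleft()
    if sum(count for _, count in log) > MAX_RECORDS_PER_WINDOW:
        # A real system would alert on-call staff and revoke the client's token here.
        return False
    return True
```

A real deployment would layer this on top of authentication rather than replace it, since the Kaikatsu Club attack reportedly combined forged API requests with exactly this kind of high-volume scripted extraction.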