According to a new study by MATS and Anthropic, advanced artificial intelligence models such as Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 were able to identify and exploit vulnerabilities in smart contracts during controlled testing. The research team used a benchmark called SCONE-bench, comprising 405 real-world smart contract attack cases from 2020 to 2025. In simulated attacks, the losses generated by these models reached up to $4.6 million.


In another experiment, AI agents reviewed 2,849 newly deployed smart contracts and identified two previously unknown vulnerabilities. GPT-5 generated $3,694 in simulated revenue against roughly $3,476 in API usage costs, for an average net profit of about $109 per exploit. All experiments were conducted in isolated sandbox environments to ensure security.
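The profitability figures above reduce to simple arithmetic. A minimal sketch, using the numbers reported in the study (variable names are illustrative, not from the paper):

```python
# Figures reported in the study (in USD)
revenue = 3694        # GPT-5's simulated revenue across the experiment
api_cost = 3476       # approximate API usage cost
exploits_found = 2    # two previously unknown vulnerabilities

net_profit = revenue - api_cost
profit_per_exploit = net_profit / exploits_found

print(net_profit)          # → 218
print(profit_per_exploit)  # → 109.0
```

The thin margin ($218 total across both exploits) is part of the study's point: attacks run by today's models are already marginally profitable, and the economics will shift further as API costs fall.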

Researchers noted that while these findings highlight real security risks, they also suggest that the same models can be used to build stronger defensive tools. A related study released by Anthropic likewise shows that artificial intelligence systems can play an important role in improving cybersecurity.

Key points:

🔍 The study shows that advanced AI models such as Claude Opus 4.5 and GPT-5 are capable of identifying and exploiting smart contract vulnerabilities.

💸 Simulated attack losses reached up to $4.6 million, and AI models also found new security vulnerabilities in experiments.

🔒 AI is not only a potential risk source but can also be used to strengthen cybersecurity measures.