Recently, the Model Context Protocol (MCP), an industry-standard communication protocol developed and maintained by AI giant Anthropic, has faced serious security challenges. A security research team, OX Security, released a report indicating that the protocol has fundamental design flaws at the architectural level, which could lead to servers being tricked into executing arbitrary code (RCE). So far, 10 CVE identifiers have been associated with this issue, and the number continues to grow.
As an open protocol aimed at standardizing communication between AI models and external data sources, MCP had previously been favored and integrated by major companies such as Microsoft and Google. However, OX Security discovered in a study on April 15 that the vulnerability was not a simple coding oversight but deeply embedded in the official SDK. This means that MCP projects built using Python, TypeScript, Java, or Rust are all vulnerable and exposed to risk.
Researchers, through testing, identified four mainstream attack vectors: unauthenticated UI injection, bypassing security hardening, prompt injection, and malicious plugin distribution. Several major open-source projects, including LiteLLM, LangChain, and IBM LangFlow, have been confirmed to have critical vulnerabilities and have been successfully exploited in real production environments. This discovery has effectively dropped a heavy bomb into the rapidly developing field of AI infrastructure.
In response to the research team's feedback, Anthropic's attitude has sparked widespread discussion in the industry. It is reported that the research team had attempted multiple times to communicate and urge the company to fix the architectural flaw, but Anthropic refused to modify the underlying architecture and responded that this behavior was "intended design." Subsequently, the research team decided to disclose the findings to the public, with the consent of the other party, to remind developers to take preventive measures.
In response to the current risks, security experts have issued urgent recommendations to users and developers: Do not directly expose large language models and related AI tools to the public network. When processing MCP input data, treat it as an untrusted source and strictly prevent prompt injection attacks. Additionally, it is recommended that all services based on MCP should run in a strict sandbox environment and update relevant software promptly to tighten system permissions as much as possible.
