On March 24, a major security incident struck the AI open-source ecosystem. The well-known Python library litellm was implanted with malicious code on the PyPI platform, a textbook supply chain attack. The attack requires no active invocation: simply installing the library is enough to trigger it, giving it an unusually wide blast radius.

Core of the Incident: litellm was implanted with an automatically executed backdoor

The affected version is 1.82.8 (released at 10:52 UTC), which includes a malicious file named litellm_init.pth. This file is automatically loaded and executed every time a Python process starts. Even developers who never manually import litellm are immediately compromised if their project indirectly depends on the library. The earlier version 1.82.7 (released at 10:39 UTC) was also contaminated.
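The .pth mechanism the backdoor abused is a documented feature of Python's site module: any line in a .pth file that begins with "import" is executed when the interpreter initializes its site directories. The benign sketch below demonstrates the mechanism by feeding a temporary directory to site.addsitedir (the same processing that runs automatically for site-packages at startup); the filename and environment variable are illustrative, not taken from the actual malware.

```python
import os
import site
import tempfile

# Create a throwaway directory standing in for site-packages.
demo_dir = tempfile.mkdtemp()

# A .pth file: lines starting with "import" are exec'd by the site
# machinery — this is how code in the backdoored package could run
# without anyone ever writing "import litellm".
pth_path = os.path.join(demo_dir, "demo_init.pth")  # hypothetical name
with open(pth_path, "w") as f:
    f.write('import os; os.environ["PTH_DEMO"] = "executed"\n')

# At real interpreter startup this happens automatically for
# site-packages; here we trigger it explicitly for the demo directory.
site.addsitedir(demo_dir)

print(os.environ.get("PTH_DEMO"))  # -> "executed"
```

Because this hook fires for every Python process on the machine, a payload planted this way runs far more often, and far earlier, than code hidden inside an ordinary module.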


Why is litellm a high-value target?

litellm is a Python library that unifies the calling of APIs from multiple large model providers. It has over 40,000 stars on GitHub and monthly downloads exceeding 95 million. Over 2,000 open-source packages have listed it as a dependency, including mainstream AI toolchains such as DSPy, MLflow, and Open Interpreter. Many developers may have never actively installed it but may have unknowingly introduced this risk point.
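Because most exposure is indirect, a useful first step is to check whether anything in your current environment declares litellm as a dependency. The sketch below uses the standard-library importlib.metadata to walk installed distributions and list those that require litellm; it only inspects the local environment, not your lockfiles.

```python
import re
from importlib import metadata

# Scan every installed distribution and collect those whose declared
# requirements name litellm — a quick way to find indirect exposure
# even if you never installed the library yourself.
dependents = []
for dist in metadata.distributions():
    for req in (dist.requires or []):
        # A requirement string looks like "litellm>=1.0; extra == 'x'";
        # extract just the project name at the front.
        match = re.match(r"[A-Za-z0-9_.\-]+", req)
        if match and match.group(0).lower() == "litellm":
            dependents.append(dist.metadata["Name"])

print("Installed packages depending on litellm:", sorted(set(dependents)))
```

An empty result for one virtual environment is not an all-clear: CI runners, containers, and colleagues' machines each have their own environments to check.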

Malicious Code Behavior: Systematic Theft of Sensitive Credentials

The malicious payload scans and steals sensitive information from the host, including:

  • SSH keys
  • AWS/GCP/Azure cloud credentials
  • Kubernetes keys
  • Environment variable files
  • Database configurations
  • Cryptocurrency wallets

The data is encrypted and packaged before being sent to a domain controlled by the attacker. If the Kubernetes environment is detected, the malicious code will also use service account tokens to automatically deploy privileged Pods across cluster nodes, enabling horizontal spread and further amplifying the threat.
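For incident response it helps to inventory which of the targeted credential files actually exist on a host, since anything present must be assumed stolen and rotated. The following defensive sketch checks a list of common locations; the paths are illustrative assumptions based on the categories reported above, not an exhaustive or authoritative list.

```python
import os

# Common credential locations matching the categories the payload
# reportedly targeted. Illustrative only — extend for your environment.
CANDIDATE_PATHS = [
    "~/.ssh/id_rsa", "~/.ssh/id_ed25519",   # SSH private keys
    "~/.aws/credentials",                    # AWS
    "~/.config/gcloud/credentials.db",       # GCP
    "~/.azure/accessTokens.json",            # Azure
    "~/.kube/config",                        # Kubernetes
    "~/.env",                                # environment variable files
]

# Any file found here should be treated as exfiltrated and rotated.
present = [p for p in CANDIDATE_PATHS
           if os.path.exists(os.path.expanduser(p))]

print("Credential files a scanner on this host could have reached:", present)
```

Run this on every machine and container image that may have installed the tainted versions, and rotate every secret the list surfaces rather than trying to judge which ones the attacker "probably" took.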

The Discovery Was Highly Ironic: A Bug in the Attacker's Code Exposed the Attack

The attack was exposed by an accidental fork bomb. Researchers were using an MCP plugin in the Cursor editor when the malicious .pth file, pulled in through the plugin's indirect dependency on litellm, triggered repeatedly in Python subprocesses and exhausted memory almost instantly. This "self-destruction" quickly brought the incident to light. Renowned AI expert Andrej Karpathy pointed out that had the attacker not made this mistake in their code, the poisoning might have gone undetected for days or even weeks.

Attack Chain Tracing: Trivy Is the Starting Point of the Supply Chain Collapse

The root cause directly points to litellm's CI/CD process — it used the Trivy vulnerability scanning tool. Trivy had already been compromised by the same attack group, TeamPCP, as early as March 19. Attackers stole litellm's PyPI publishing token through the compromised Trivy and directly pushed the malicious version. Previously, on March 23, Checkmarx KICS was also attacked by the same group. Security researcher Gal Nagli commented that the open-source supply chain has now experienced a chain reaction collapse. The compromise of Trivy directly led to litellm being compromised, with credentials from tens of thousands of production environments falling into the hands of attackers and becoming new ammunition for future attacks.

The Attacker's "Silencing" Operation Failed

After the first issue reports appeared on GitHub, the attacker used 73 stolen accounts to post 88 spam comments within 102 seconds, attempting to drown out the discussion, and then used stolen maintainer permissions to forcibly close the issue. The community quickly moved the discussion to the Hacker News platform and continued tracking the event.

Expert Opinion: Supply Chain Attacks Are the Most Terrifying Invisible Threat

Karpathy reiterated the risks of software dependencies through this incident: "Every time we introduce an external package, we may be planting a time bomb that could be poisoned deep in the dependency tree." He stated that he will increasingly prefer to let large models generate simple function code directly rather than rely on third-party libraries in the future.

Urgent Security Recommendations

AIbase reminds all AI developers:

  1. Immediately run pip show litellm to check the installed version; the last known-safe version is 1.82.6;
  2. If version 1.82.7 or 1.82.8 is found, treat all credentials on that machine as leaked and rotate every one of them immediately, including SSH keys, cloud credentials, and Kubernetes tokens;
  3. Clean up affected environments by rebuilding containers or virtual machines, and strengthen supply chain audits.
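Step 1 can also be done programmatically, which is handy for sweeping many environments in CI. This minimal sketch reads the installed litellm version via importlib.metadata and compares it against the versions named above:

```python
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}  # tainted versions named in the advisory
LAST_SAFE = "1.82.6"                # last version released before the attack

try:
    installed = metadata.version("litellm")
except metadata.PackageNotFoundError:
    installed = None

if installed is None:
    print("litellm is not installed in this environment.")
elif installed in COMPROMISED:
    print(f"COMPROMISED version {installed} found — rotate all credentials now.")
else:
    print(f"litellm {installed} installed (last known-safe: {LAST_SAFE}).")
```

Note that a clean result only covers the environment the script runs in; each virtual environment, container image, and CI runner needs its own check.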

This incident once again sounds the alarm for open-source supply chain security. In today's era where AI toolchains heavily rely on third-party libraries, every dependency introduced must be approached with the highest level of vigilance.