According to AIbase, Microsoft recently issued an important security warning regarding its artificial intelligence assistant, OpenClaw, clearly stating that the tool should not be run on standard personal or corporate workstations but should only be deployed in fully isolated environments.
OpenClaw is designed as an AI agent capable of performing tasks autonomously. To achieve automated operations, users must grant it full access to computer systems and software, including email, files, online services, and login credentials. This "high-privilege + persistent state (memory)" operating mode makes it powerful but also poses significant security risks.
The Microsoft Defender Security Research team stated in their official blog that OpenClaw should be considered an "untrusted code execution with persistent credentials." Once exploited, attackers could not only steal credentials and sensitive data but also manipulate the agent's persistent memory, making it execute malicious instructions in subsequent runs.

Microsoft disclosed that OpenClaw currently faces two core threats:
First, Indirect Prompt Injection.
Attackers can hide malicious instructions within the content that the agent reads, thereby controlling its tool calls or tampering with its memory, which can have long-term effects on its behavior. If strict security boundaries are not set, the agent may unknowingly perform actions according to the attacker's intent.
Second, Skill-based Malware.
OpenClaw can download and run code from the internet to expand its functionality, a mechanism that could become an attack vector. Attackers can deliver "skill" modules containing malicious code to achieve remote control or implant backdoors. Microsoft emphasized that successful attacks do not necessarily rely on traditional malware but could also be achieved through subtle configuration changes.
The security risks are not theoretical. Recently, the STRIKE threat intelligence team under SecurityScorecard found that OpenClaw's control panel was exposed on over 42,000 independent IP addresses across 82 countries, with approximately 50,000 instances having remote code execution (RCE) vulnerabilities, allowing attackers to directly control the host system, putting user accounts at risk of being compromised.
Given these risks, Microsoft recommends that organizations conduct testing and evaluation in dedicated virtual machines or independent physical systems, using dedicated, non-privileged credentials for the runtime environment, limiting access to non-sensitive data, and establishing continuous monitoring and regular reconstruction mechanisms to avoid direct deployment in core production systems.
As autonomous agent-type AI tools gradually enter practical application scenarios, their security boundaries and governance frameworks are becoming issues that the industry must address. The case of OpenClaw once again reminds enterprises: while embracing AI's automation capabilities, they must simultaneously build strict isolation, permission, and monitoring frameworks; otherwise, powerful execution capabilities could also become an entry point for attacks.
