AMD has officially launched an open-source framework called OpenClaw, along with two hardware reference configurations: RyzenClaw and RadeonClaw. The move advances its "Agent Computer" initiative, letting developers run large language models and multi-agent workflows on a local PC rather than in the cloud, improving privacy and security while reducing dependence on internet connectivity and subscription services.

Currently, OpenClaw runs mainly on Windows via WSL2, using LM Studio with a llama.cpp backend for local inference. The framework supports advanced models including Qwen3.535B, and it includes an embedded memory framework called Memory.md, so that context information stays stored on local hardware.
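OpenClaw's actual Memory.md format is not documented here, but the core idea of a file-based agent memory that never leaves the machine can be sketched in a few lines of Python (the file name, note layout, and helper names below are assumptions for illustration, not OpenClaw's real API):

```python
from datetime import date
from pathlib import Path

def remember(memory_file: Path, note: str) -> None:
    """Append a dated note to a local markdown memory file."""
    stamp = date.today().isoformat()
    with memory_file.open("a", encoding="utf-8") as f:
        f.write(f"- [{stamp}] {note}\n")

def recall(memory_file: Path) -> list[str]:
    """Read all stored notes back; nothing ever leaves local disk."""
    if not memory_file.exists():
        return []
    return [line.strip()
            for line in memory_file.read_text(encoding="utf-8").splitlines()
            if line.strip()]

# Example: persist agent context between sessions on local hardware.
mem = Path("Memory.md")
remember(mem, "User prefers answers in Chinese.")
print(recall(mem))
```

Because the memory is just a markdown file on local disk, it can be inspected, edited, or deleted by the user at any time, which is the privacy property the framework is aiming for.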
To cover different compute needs, AMD offers two technical paths. The RyzenClaw configuration is built around the Ryzen AI Max+ processor with 128GB of unified memory; it can run up to six local AI agents simultaneously and offers an extended context window of roughly 260,000 tokens. The RadeonClaw configuration prioritizes inference speed: a Radeon AI PRO R9700 graphics card with 32GB of VRAM cuts the processing time for 10,000 tokens to about 4.4 seconds.
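The quoted figures imply a throughput number worth making explicit. A quick back-of-the-envelope calculation, treating the 10,000-tokens-in-4.4-seconds figure as a sustained processing rate (an assumption; real throughput varies with model and batch size):

```python
# RadeonClaw's quoted figure: 10,000 tokens processed in 4.4 seconds.
tokens = 10_000
seconds = 4.4
throughput = tokens / seconds            # tokens per second
print(f"{throughput:.0f} tokens/s")      # ≈ 2273 tokens/s

# If such a rate held, filling RyzenClaw's ~260,000-token
# context window would take on the order of two minutes.
context = 260_000
print(f"{context / throughput:.0f} s")   # ≈ 114 s
```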
Although these configurations carry a high starting price (the RyzenClaw system costs roughly $2,700) and are aimed mainly at engineers and early adopters, AMD's move sends a clear signal: the personal computer of the future will be not just an information-processing hub but an autonomous, user-controlled AI agent platform. By bringing data-center-class AI processing back to the desktop, AMD is staking out a key position in the distributed AI ecosystem.
