At CES 2026, NVIDIA laid out its latest robotics strategy in full: from foundation models and simulation platforms to edge computing hardware, a complete technical stack for "physical AI" was officially unveiled. The signal was unmistakable: NVIDIA's goal is not to supply individual components but to become the default platform for general-purpose robots, much as Android became the operating system of the smartphone era.
This strategy reflects a shift in the focus of the entire AI industry. As sensor costs fall, simulation technology matures, and models with cross-task generalization keep emerging, artificial intelligence is moving from cloud-based inference toward robotic systems that can perceive, reason, and act in the real physical world. NVIDIA is positioning itself to own the foundational layer of that transition.

According to AIbase, NVIDIA disclosed the core details of its "physical AI" full-stack ecosystem during CES. The company released a batch of open-source robot foundation models that let robots reason, plan, and adapt across multiple tasks and environments, breaking the previous limitation of executing only a single, fixed task. The models have also been published on Hugging Face, further lowering the barrier to entry for developers.
Specifically, the new model lineup includes several key components. Cosmos Transfer 2.5 and Cosmos Predict 2.5 serve as world models, used to generate synthetic data and evaluate robot policies in simulation. Cosmos Reason 2 is a reasoning-oriented vision-language model (VLM) that lets AI "understand" the physical world and act accordingly. Isaac GR00T N1.6 is a new vision-language-action (VLA) model for humanoid robots; it uses Cosmos Reason as its core brain to achieve whole-body control, allowing a humanoid to perform complex manipulation while moving.
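The division of labor in this stack can be illustrated with a minimal Python sketch. Every class and method name below is a hypothetical stand-in invented for illustration, not an NVIDIA API: a world model rolls the scene forward, a reasoning VLM turns an observation plus an instruction into plan steps, and a VLA policy maps each step to whole-body actions.

```python
from dataclasses import dataclass
from typing import List

# All names below are hypothetical stand-ins used to illustrate the
# data flow (world model -> reasoning VLM -> VLA policy); they are
# NOT NVIDIA APIs.

@dataclass
class Observation:
    rgb: list       # placeholder for camera frames
    proprio: list   # placeholder for joint states

class WorldModel:
    """Stand-in for a Cosmos-style world model: rolls the scene forward."""
    def predict(self, obs: Observation, action: List[float]) -> Observation:
        # A real world model would simulate physics; here we just
        # integrate the action into the joint state.
        return Observation(rgb=obs.rgb,
                           proprio=[p + a for p, a in zip(obs.proprio, action)])

class ReasoningVLM:
    """Stand-in for a Cosmos Reason-style VLM: instruction -> plan steps."""
    def plan(self, obs: Observation, instruction: str) -> List[str]:
        return [f"step: {instruction}"]

class VLAPolicy:
    """Stand-in for a GR00T-style VLA model: plan step -> joint action."""
    def act(self, obs: Observation, step: str) -> List[float]:
        return [0.1 for _ in obs.proprio]

def control_loop(obs, instruction, vlm, policy, world):
    for step in vlm.plan(obs, instruction):
        action = policy.act(obs, step)
        obs = world.predict(obs, action)  # evaluate the action in simulation
    return obs

obs = control_loop(Observation(rgb=[], proprio=[0.0, 0.0]),
                   "pick up the cup", ReasoningVLM(), VLAPolicy(), WorldModel())
print(obs.proprio)  # -> [0.1, 0.1]
```

The point of the sketch is the separation of concerns: the reasoning model never outputs joint torques, and the policy never has to parse language on its own.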
On the software and training side, NVIDIA also introduced Isaac Lab-Arena, an open-source simulation framework hosted on GitHub that has become a key part of its physical AI platform, allowing robot capabilities to be tested safely in virtual environments. The platform targets a long-standing industry pain point: as robot tasks grow more complex, real-world testing of everything from precision grasping to cable installation becomes costly, time-consuming, and risky. Isaac Lab-Arena integrates task scenarios, training tools, and established benchmarks such as Libero, RoboCasa, and RoboTwin, giving the industry a unified, reusable evaluation and training standard for the first time.
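A unified evaluation standard of this kind boils down to running the same policy across several benchmark suites and reporting comparable success rates. The sketch below shows that idea in the spirit of Isaac Lab-Arena; the suite names are the real benchmarks cited above, but the runner API is invented for illustration and is not NVIDIA's.

```python
# Hypothetical benchmark harness sketch. The suite names (Libero,
# RoboCasa, RoboTwin) are real benchmarks; the runner API is invented.

def evaluate(policy, episodes_per_task=10):
    suites = ["Libero", "RoboCasa", "RoboTwin"]
    results = {}
    for suite in suites:
        # `policy(suite, episode)` returns True if the episode succeeds.
        successes = sum(1 for ep in range(episodes_per_task)
                        if policy(suite, ep))
        results[suite] = successes / episodes_per_task
    return results

# Trivial stand-in policy that "succeeds" on even-numbered episodes.
scores = evaluate(lambda suite, ep: ep % 2 == 0)
print(scores)  # -> {'Libero': 0.5, 'RoboCasa': 0.5, 'RoboTwin': 0.5}
```

The value of a shared harness is that a number like "0.5 on RoboCasa" means the same thing for every model that runs through it.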
The system is supported by NVIDIA OSMO, an open-source "command center." OSMO connects the entire workflow from data generation and simulation through model training, spans desktop and cloud environments, and provides unified scheduling and management for robot development.
On the hardware side, NVIDIA also released the Jetson T4000, a Blackwell-based compute module in the Jetson Thor family. The product is positioned as a cost-effective edge-compute upgrade, delivering up to 1,200 TFLOPS of AI compute within a 40- to 70-watt power envelope and shipping with 64GB of memory, targeting complex on-robot reasoning and control workloads.
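The quoted figures imply a rough power-efficiency envelope, which a quick back-of-the-envelope calculation makes concrete:

```python
# Efficiency implied by the figures quoted above: up to 1,200 TFLOPS
# of AI compute within a 40-70 W power envelope.
tflops = 1200
for watts in (40, 70):
    print(f"{watts} W -> {tflops / watts:.1f} TFLOPS/W")
# 40 W -> 30.0 TFLOPS/W
# 70 W -> 17.1 TFLOPS/W
```

In other words, even at the top of its power range the module sits well above 17 TFLOPS per watt, which is what makes running large reasoning models directly on the robot plausible.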
NVIDIA has also deepened its collaboration with Hugging Face. The two companies have integrated Isaac and GR00T technologies into Hugging Face's LeRobot framework, connecting NVIDIA's roughly 2 million robot developers with Hugging Face's more than 13 million AI builders. The open-source humanoid robot Reachy2 on the Hugging Face platform can now run directly on the Jetson Thor chip, letting developers test different models and algorithms without relying on closed systems.
