Meta AI has released SAM3D, the latest model in the Segment Anything series, with two sets of weights: SAM3D Objects for general objects and scenes, and SAM3D Body for reconstructing humans from images. Both require only a single 2D photo and output 3D assets with consistent texture, material, and geometry, and they significantly outperform existing NeRF and Gaussian Splatting approaches on real-world images.

At its core, SAM3D uses a joint spatial-position and semantic encoding, predicting a 3D coordinate and a surface normal for every pixel, which makes its outputs physically accurate and directly usable in AR/VR, robotics, and film post-production. Alongside the model, Meta has open-sourced the weights, inference code, and evaluation benchmarks, and has launched a "View in Room" feature on Facebook Marketplace that lets users project a product's 3D model into their own room for preview.
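To make the per-pixel output concrete, the sketch below (pure NumPy, with random placeholder arrays standing in for model predictions) shows how an H×W map of 3D coordinates and normals can be flattened into a point cloud with per-point normals, the kind of representation downstream AR/VR or robotics tooling consumes. The array shapes and the confidence-mask step are assumptions for illustration, not Meta's actual output format.

```python
import numpy as np

# Placeholder for SAM3D-style per-pixel predictions on an H x W image:
# a 3D coordinate and a surface normal for every pixel (shapes are assumed).
H, W = 480, 640
xyz_map = np.random.rand(H, W, 3).astype(np.float32)        # predicted 3D coordinates
normal_map = np.random.randn(H, W, 3).astype(np.float32)    # predicted surface normals
normal_map /= np.linalg.norm(normal_map, axis=-1, keepdims=True)  # normalize to unit length
conf_map = np.random.rand(H, W).astype(np.float32)          # per-pixel confidence (assumed)

# Keep only confident pixels and flatten to an (N, 3) point cloud with matching normals.
mask = conf_map > 0.5
points = xyz_map[mask]      # (N, 3) 3D positions
normals = normal_map[mask]  # (N, 3) surface normals

print(points.shape, normals.shape)
```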
According to Meta's tests, SAM3D Objects reduces Chamfer Distance by 28% and improves normal consistency by 19% on public datasets; SAM3D Body improves MPJPE by 14% over existing state-of-the-art single-image methods on the AGORA-3D benchmark and supports one-click rigging to Mixamo skeletons.
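For reference, Chamfer Distance, the reconstruction metric cited above, measures the average nearest-neighbor distance between the predicted and ground-truth point sets in both directions. A minimal NumPy sketch of the squared-distance variant is below; actual evaluation pipelines typically use KD-trees and dataset-specific point sampling rather than this brute-force version.

```python
import numpy as np

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds of shape (N, 3) and (M, 3).

    Brute-force O(N*M) version using squared Euclidean distances.
    """
    # Pairwise squared distances between every predicted and ground-truth point.
    diff = pred[:, None, :] - gt[None, :, :]   # (N, M, 3)
    d2 = np.sum(diff ** 2, axis=-1)            # (N, M)
    # Average nearest-neighbor distance in both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Example with random point clouds standing in for a reconstruction and its ground truth.
pred = np.random.rand(1000, 3)
gt = np.random.rand(1200, 3)
print(chamfer_distance(pred, gt))
```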
Meta also revealed that the model is integrated into the Quest 3 and Horizon Worlds creation tools, that developers can call the API through the Edits and Vibes apps at a rate of $0.02 per generated model, and that a real-time mobile inference SDK will be released in Q1 2026.
Project address: https://ai.meta.com/blog/sam-3d/
