RoboTracer: Mastering Spatial Trace with Reasoning in Vision-Language Models for Robotics
Abstract
RoboTracer, a 3D-aware vision-language model, enhances spatial tracing by combining supervised and reinforcement fine-tuning with a universal spatial encoder and a regression-supervised decoder, achieving state-of-the-art performance on TraceSpatial-Bench.
Spatial tracing, a fundamental embodied interaction ability for robots, is inherently challenging: it requires multi-step metric-grounded reasoning compounded with complex spatial referring and real-world metric measurement, and existing methods struggle with this compositional task. To this end, we propose RoboTracer, a 3D-aware VLM that is the first to achieve both 3D spatial referring and measuring, via a universal spatial encoder and a regression-supervised decoder that enhance scale awareness during supervised fine-tuning (SFT). Moreover, RoboTracer advances multi-step metric-grounded reasoning via reinforcement fine-tuning (RFT) with metric-sensitive process rewards, which supervise key intermediate perceptual cues so that spatial traces are generated accurately. To support SFT and RFT training, we introduce TraceSpatial, a large-scale dataset of 30M QA pairs spanning outdoor, indoor, and tabletop scenes and covering complex reasoning processes of up to 9 steps. We further present TraceSpatial-Bench, a challenging benchmark that fills the evaluation gap for spatial tracing. Experimental results show that RoboTracer surpasses baselines in spatial understanding, measuring, and referring with an average success rate of 79.1%, and achieves SOTA performance on TraceSpatial-Bench by a large margin, exceeding Gemini-2.5-Pro by 36% in accuracy. Notably, RoboTracer can be integrated with various control policies to execute long-horizon, dynamic tasks across diverse robots (UR5, G1 humanoid) in cluttered real-world scenes.
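To make the two-stage recipe above concrete, here is a minimal PyTorch sketch of how a regression-supervised decoder could sit alongside the usual language-modeling loss during SFT. All module names, shapes, and the loss weighting `lam` are our own illustrative assumptions, not RoboTracer's actual implementation.

```python
# Minimal sketch (our assumptions, not RoboTracer's actual code) of a
# regression-supervised decoder trained alongside the token loss, so the
# model stays scale-aware during SFT.
import torch
import torch.nn as nn


class SpatialDecoder(nn.Module):
    """Regresses metric quantities (e.g., distances in meters) from the
    VLM's hidden states at designated query positions."""

    def __init__(self, hidden_dim: int = 1024):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, 1),  # one scalar metric value per query
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.head(h).squeeze(-1)


def sft_loss(token_logits, token_targets, metric_pred, metric_gt, lam=0.1):
    """SFT objective: language-modeling loss plus a scale-aware regression
    term; the weighting `lam` is an assumed hyperparameter."""
    lm = nn.functional.cross_entropy(
        token_logits.flatten(0, 1), token_targets.flatten()
    )
    reg = nn.functional.smooth_l1_loss(metric_pred, metric_gt)
    return lm + lam * reg
```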
Community
Project Page: https://zhoues.github.io/RoboTracer/
We present RoboTracer, the first 3D-aware VLM for multi-step metric-grounded spatial tracing with explicit reasoning.
Highlights:
RoboTracer first acquires both 3D spatial referring and measuring via SFT, and then advances to multi-step metric-grounded spatial tracing via RFT with metric-sensitive process rewards (see the sketch after this list).
To support SFT and RFT training, we introduce TraceSpatial, a large-scale dataset of 30M QA pairs spanning outdoor/indoor/tabletop scenes and containing complex reasoning processes (up to 9 steps).
The SFT-trained RoboTracer achieves SOTA spatial understanding, measuring, and referring, and the RFT-trained RoboTracer exhibits strong spatial tracing in novel cluttered and dynamic scenes that demand complex reasoning.
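To illustrate the RFT stage, the following is a minimal sketch of what a metric-sensitive process reward could look like: each intermediate perceptual cue that predicts a metric quantity (e.g., a distance in meters) is scored by its relative error against ground truth. The exponential shaping, the tolerance `tol`, and the step-matching scheme are our own assumptions, not the paper's actual reward.

```python
# Minimal sketch (our assumptions, not the paper's actual reward) of a
# metric-sensitive process reward: each intermediate step that predicts
# a metric quantity is scored by relative error against ground truth.
import math


def step_reward(pred_m: float, gt_m: float, tol: float = 0.1) -> float:
    """Reward in (0, 1]; near 1 when the prediction is within ~tol
    relative error of the ground truth, decaying smoothly otherwise."""
    rel_err = abs(pred_m - gt_m) / max(abs(gt_m), 1e-6)
    return math.exp(-rel_err / tol)


def process_reward(pred_steps: list[float], gt_steps: list[float]) -> float:
    """Average per-step reward over the reasoning chain (TraceSpatial
    chains run up to 9 steps); unmatched steps contribute 0."""
    n = max(len(pred_steps), len(gt_steps), 1)
    return sum(step_reward(p, g) for p, g in zip(pred_steps, gt_steps)) / n


# Example: a 3-step chain whose intermediate distances (meters) are close
# to ground truth earns a high process reward (~0.76 with tol=0.1).
print(process_reward([0.52, 1.98, 0.31], [0.50, 2.00, 0.30]))
```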