LangToMo: A hierarchical VLA framework for robot control that uses intermediate motion representations.
We present LangToMo, a vision-language-action framework structured as a dual-system architecture that uses pixel motion forecasts as intermediate representations. Our high-level System 2, an image diffusion model, generates text-conditioned pixel motion sequences from a single frame to guide robot control. Pixel motion, a universal, interpretable, and motion-centric representation, can be extracted from videos in a self-supervised manner, enabling diffusion-model training on web-scale video-caption data. Treating the generated pixel motion as a learned universal representation, our low-level System 1 module translates it into robot actions via motion-to-action mapping functions, which can be either hand-crafted or learned with minimal supervision. System 2 operates as a high-level policy applied at sparse temporal intervals, while System 1 acts as a low-level policy at dense temporal intervals. This hierarchical decoupling enables flexible, scalable, and generalizable robot control in both unsupervised and supervised settings, bridging the gap between language, motion, and action.
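The control loop below is a minimal sketch of this dual-system design, not the released LangToMo implementation: all names (`PixelMotionDiffusion`, `motion_to_action`, `control_loop`, the environment methods) are hypothetical placeholders, and the hand-crafted mapping rule is purely illustrative.

```python
# Hypothetical sketch of the dual-system (System 2 / System 1) control loop.
import numpy as np


class PixelMotionDiffusion:
    """Stand-in for System 2: a text-conditioned image diffusion model that
    forecasts a pixel-motion sequence from a single RGB frame."""

    def predict_motion(self, frame: np.ndarray, instruction: str) -> np.ndarray:
        # Would return a (T, H, W, 2) array of per-pixel displacement fields.
        raise NotImplementedError


def motion_to_action(motion: np.ndarray, step: int) -> np.ndarray:
    """Stand-in for System 1: maps forecast pixel motion at one dense time step
    to a low-level action. The paper allows this mapping to be hand-crafted or
    learned with minimal supervision; here we use a toy hand-crafted rule."""
    dxdy = motion[step].reshape(-1, 2).mean(axis=0)   # mean image-space displacement
    return np.array([dxdy[0], dxdy[1], 0.0])          # padded to a 3-DoF action for illustration


def control_loop(system2, env, instruction, horizon=100, replan_every=10):
    """System 2 replans at sparse intervals; System 1 acts at every dense step."""
    motion = None
    for t in range(horizon):
        if t % replan_every == 0:                     # sparse high-level policy
            frame = env.get_observation()
            motion = system2.predict_motion(frame, instruction)
        action = motion_to_action(motion, t % replan_every)  # dense low-level policy
        env.send_action(action)
```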
We learn to forecast pixel motion as universal motion features from video-caption pairs through scalable, self-supervised training of a diffusion model. This diffusion model serves as our System 2, which predicts the next action as pixel motion given the initial observation and the goal condition. System 2 forecasts motion at sparse intervals, while System 1 maps it to dense action vectors.
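As a rough illustration of how such self-supervised pixel-motion targets could be extracted from raw video, the snippet below uses dense Farneback optical flow from OpenCV; the paper does not prescribe this particular flow estimator, and the function name and shapes are assumptions for the sketch.

```python
# Hypothetical sketch: derive pixel-motion training targets from video frames
# without any action labels or manual annotation.
import cv2
import numpy as np


def pixel_motion_targets(frames: list[np.ndarray]) -> np.ndarray:
    """Compute dense optical flow between consecutive RGB frames.

    Returns an array of shape (len(frames) - 1, H, W, 2) holding per-pixel
    (dx, dy) displacements; paired with the first frame and the video caption,
    this forms one self-supervised training example for the diffusion model.
    """
    flows = []
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_RGB2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
        )
        flows.append(flow)
        prev_gray = gray
    return np.stack(flows)
```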
We report the mean success rate (%) across tasks. Each table entry is the success rate averaged over 3 camera poses, with 25 seeds per camera pose.
Method | Video Only Training | T1 | T2 | T3 | T4 | Avg |
---|---|---|---|---|---|---|
RT-2 Style | ✗ | 0 | 0 | 0 | 0 | 0 |
LLaRA | ✗ | 70 | 80 | 55 | 55 | 65.0 |
AVDC | ✓ | 10 | 20 | 0 | 0 | 15.0 |
LTM-H (Ours) | ✓ | 80 | 70 | 65 | 60 | 68.8 |
Real-World Task Performance: We follow the LLaRA setup to evaluate model performance on real-world tasks under fine-tuned settings.
Method | Video Only Training | T1 | T2 | T3 | T4 | Avg |
---|---|---|---|---|---|---|
RT-2 Style | ✗ | 0 | 0 | 0 | 0 | 0 |
LLaRA | ✗ | 40 | 20 | 10 | 20 | 22.5 |
AVDC | ✓ | 0 | 0 | 0 | 0 | 0 |
GPT-4o | ✓ | 20 | 30 | 10 | 15 | 18.8 |
LTM-H (Ours) | ✓ | 40 | 30 | 35 | 30 | 33.8 |
Zero-Shot Transfer on Real-World Tasks: Evaluations follow the settings in LLaRA.
See LangToMo's performance on the challenging Meta-World benchmark tasks.
See LangToMo in action with real-world robotic applications and scenarios.