Pixel Motion as Universal Representation for Robot Control

LangToMo: A hierarchical vision-language-action (VLA) framework for robot control that uses pixel motion as an intermediate representation.

Figure: Overview of the LangToMo framework.

Abstract

We present LangToMo, a vision-language-action framework structured as a dual-system architecture that uses pixel motion forecasts as intermediate representations. Our high-level System 2, an image diffusion model, generates text-conditioned pixel motion sequences from a single frame to guide robot control. Pixel motion (a universal, interpretable, and motion-centric representation) can be extracted from videos in a self-supervised manner, enabling diffusion model training on web-scale video-caption data. Treating the generated pixel motion as a learned universal representation, our low-level System 1 module translates it into robot actions via motion-to-action mapping functions, which can be either hand-crafted or learned with minimal supervision. System 2 operates as a high-level policy applied at sparse temporal intervals, while System 1 acts as a low-level policy at dense temporal intervals. This hierarchical decoupling enables flexible, scalable, and generalizable robot control under both unsupervised and supervised settings, bridging the gap between language, motion, and action.
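
The dual-system structure described above can be sketched as a simple control loop: System 2 forecasts pixel motion at sparse intervals, and System 1 converts each forecast into dense actions. The code below is a minimal, hedged illustration; `MotionDiffusion`, `motion_to_action`, `DummyRobot`, the 4-DoF action format, and the replanning interval are placeholder assumptions, not the authors' released API.

```python
import numpy as np


class MotionDiffusion:
    """System 2 (high level): a text-conditioned image diffusion model that
    forecasts pixel motion (an HxWx2 flow field) from a single RGB frame."""

    def forecast(self, frame: np.ndarray, instruction: str) -> np.ndarray:
        # Placeholder: a real model would run iterative denoising conditioned
        # on the frame and a text embedding; here we return zero motion.
        h, w, _ = frame.shape
        return np.zeros((h, w, 2), dtype=np.float32)


def motion_to_action(pixel_motion: np.ndarray) -> np.ndarray:
    """System 1 (low level): map forecast pixel motion to a robot action.
    The paper allows this mapping to be hand-crafted or learned with minimal
    supervision; a simple mean-flow rule stands in for it here."""
    mean_flow = pixel_motion.reshape(-1, 2).mean(axis=0)      # average (dx, dy)
    return np.array([mean_flow[0], mean_flow[1], 0.0, 0.0])   # toy 4-DoF action


class DummyRobot:
    """Stand-in environment so the sketch runs end to end."""

    def observe(self) -> np.ndarray:
        return np.zeros((128, 128, 3), dtype=np.float32)

    def step(self, action: np.ndarray) -> None:
        pass


def control_loop(robot, instruction: str, horizon: int = 100, replan_every: int = 10):
    """System 2 replans at sparse intervals; System 1 acts at every step."""
    system2 = MotionDiffusion()
    pixel_motion = None
    for t in range(horizon):
        frame = robot.observe()                   # current RGB observation
        if t % replan_every == 0:                 # sparse high-level replanning
            pixel_motion = system2.forecast(frame, instruction)
        action = motion_to_action(pixel_motion)   # dense low-level control
        robot.step(action)


if __name__ == "__main__":
    control_loop(DummyRobot(), "open the door with the handle")
```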

LangToMo Architecture

We learn to forecast pixel motion, a universal motion representation, from video-caption pairs through scalable, self-supervised training of a diffusion model. This diffusion model serves as our System 2: given the current observation and a language goal, it predicts the upcoming motion as pixel motion. System 2 forecasts motion at sparse temporal intervals, while System 1 maps each forecast to dense action vectors, as sketched below.
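
Because pixel motion can be extracted from raw video without labels, training targets for System 2 can be built directly from video-caption data with an off-the-shelf optical-flow estimator. The sketch below is only an illustration of that idea: it assumes OpenCV's Farneback flow, a fixed frame stride of 8, and no normalization, whereas the paper's actual flow estimator and preprocessing may differ.

```python
import cv2
import numpy as np


def pixel_motion_targets(frames: list, stride: int = 8) -> list:
    """Pair each frame with the optical flow toward a frame `stride` steps
    ahead, yielding (observation, pixel-motion) samples that, together with
    the video's caption, can supervise a text-conditioned diffusion model."""
    samples = []
    for t in range(0, len(frames) - stride, stride):
        prev_gray = cv2.cvtColor(frames[t], cv2.COLOR_BGR2GRAY)
        next_gray = cv2.cvtColor(frames[t + stride], cv2.COLOR_BGR2GRAY)
        # Dense optical flow as the pixel-motion target (HxWx2, per-pixel dx/dy).
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        samples.append({"frame": frames[t], "pixel_motion": flow})
    return samples


if __name__ == "__main__":
    video = [np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8) for _ in range(64)]
    data = pixel_motion_targets(video)
    print(len(data), data[0]["pixel_motion"].shape)   # 7 (128, 128, 2)
```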

Evaluations

Results on MetaWorld Environment

| Method | door-open | door-close | basketball | shelf-place | btn-press | btn-top | faucet-close | faucet-open | handle-press | hammer | assembly | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BC-Scratch | 21.3 | 36.0 | 0.0 | 0.0 | 34.7 | 12.0 | 18.7 | 17.3 | 37.3 | 0.0 | 1.3 | 16.2 |
| BC-R3M | 1.3 | 58.7 | 0.0 | 0.0 | 36.0 | 4.0 | 18.7 | 22.7 | 28.0 | 0.0 | 0.0 | 15.4 |
| UniPi (With Replan) | 0.0 | 36.0 | 0.0 | 0.0 | 6.7 | 0.0 | 4.0 | 9.3 | 13.3 | 4.0 | 0.0 | 6.1 |
| AVDC (Flow) | 0.0 | 0.0 | 0.0 | 0.0 | 1.3 | 40.0 | 42.7 | 0.0 | 66.7 | 0.0 | 0.0 | 13.7 |
| AVDC (Default) | 72.0 | 89.3 | 37.3 | 18.7 | 60.0 | 24.0 | 53.3 | 24.0 | 81.3 | 8.0 | 6.7 | 43.1 |
| LTM-H (Ours) | 76.0 | 94.7 | 38.0 | 15.2 | 82.0 | 84.7 | 41.3 | 33.3 | 97.3 | 4.2 | 6.9 | 52.1 |
| LTM-S (Ours) | 77.3 | 95.0 | 39.0 | 18.7 | 82.0 | 84.3 | 46.7 | 35.3 | 98.0 | 6.7 | 6.9 | 53.6 |

We report the mean success rate (%) across tasks. Each table entry is the success rate averaged over 3 camera poses with 25 seeds per camera pose (75 rollouts per task).
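
For reference, each entry reduces to a simple average over binary rollout outcomes. The snippet below uses random outcomes purely to illustrate the aggregation; they are not results from the paper.

```python
import numpy as np

# Each table entry averages binary success outcomes over 3 camera poses x 25
# seeds per pose (75 rollouts per task). Random outcomes are used here only
# to show the aggregation, not to reproduce the reported numbers.
successes = np.random.rand(3, 25) < 0.5    # [camera_pose, seed] success flags
success_rate = 100.0 * successes.mean()    # one table entry, in percent
print(f"{success_rate:.1f}")               # always a multiple of 100/75 ≈ 1.33
```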

Results on Real-World Environment

| Method | Video Only Training | T1 | T2 | T3 | T4 | Avg |
|---|---|---|---|---|---|---|
| RT-2 Style | | 0 | 0 | 0 | 0 | 0 |
| LLaRA | | 70 | 80 | 55 | 55 | 65.0 |
| AVDC | | 10 | 20 | 0 | 0 | 15.0 |
| LTM-H (Ours) | | 80 | 70 | 65 | 60 | 68.8 |

Real-World Task Performance: We follow the setup in LLaRA to evaluate model performance on real-world tasks in the fine-tuned setting.

| Method | Video Only Training | T1 | T2 | T3 | T4 | Avg |
|---|---|---|---|---|---|---|
| RT-2 Style | | 0 | 0 | 0 | 0 | 0 |
| LLaRA | | 40 | 20 | 10 | 20 | 22.5 |
| AVDC | | 0 | 0 | 0 | 0 | 0 |
| GPT-4o | | 20 | 30 | 10 | 15 | 18.8 |
| LTM-H (Ours) | | 40 | 30 | 35 | 30 | 33.8 |

Zero-Shot Transfer on Real-World Tasks: Evaluations follow the settings in LLaRA.

Demonstrations

Meta World Benchmark

See LangToMo's performance on the challenging Meta World benchmark tasks. Each demonstration shows the input video, the predicted pixel motions, and the generated actions.

Task Goal: Open the door with handle. (three demonstrations)

Real World Demonstrations

See LangToMo in action with real-world robotic applications and scenarios. Each demonstration shows the input video, the predicted pixel motions, and the generated actions.

Task Goal: Place the strawberry in the tray.
Task Goal: Place the corn in the bowl.
Task Goal: Place the duck in the tray.
