
Masquerade: Learning from In-the-wild Human Videos using Data-Editing

Source: arXiv
Abstract

Robot manipulation research still suffers from significant data scarcity: even the largest robot datasets are orders of magnitude smaller and less diverse than those that fueled recent breakthroughs in language and vision. We introduce Masquerade, a method that edits in-the-wild egocentric human videos to bridge the visual embodiment gap between humans and robots and then learns a robot policy with these edited videos. Our pipeline turns each human video into robotized demonstrations by (i) estimating 3-D hand poses, (ii) inpainting the human arms, and (iii) overlaying a rendered bimanual robot that tracks the recovered end-effector trajectories. Pre-training a visual encoder to predict future 2-D robot keypoints on 675K frames of these edited clips, and continuing that auxiliary loss while fine-tuning a diffusion policy head on only 50 robot demonstrations per task, yields policies that generalize significantly better than prior work. On three long-horizon, bimanual kitchen tasks evaluated in three unseen scenes each, Masquerade outperforms baselines by 5-6x. Ablations show that both the robot overlay and co-training are indispensable, and performance scales logarithmically with the amount of edited human video. These results demonstrate that explicitly closing the visual embodiment gap unlocks a vast, readily available source of data from human videos that can be used to improve robot policies.
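The abstract describes a co-training recipe: a shared visual encoder is pre-trained to predict future 2-D robot keypoints on edited human-video frames, and that auxiliary objective is kept active while a policy head is fine-tuned on a small set of real robot demonstrations. The sketch below is a minimal, illustrative rendering of that idea and is not the authors' implementation; the encoder architecture, keypoint count, action dimension, loss weighting, and the simple regression policy head (standing in for the paper's diffusion policy) are all assumptions made for clarity.

```python
# Minimal co-training sketch (illustrative only, assumptions noted above).
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Toy stand-in for the visual encoder (a larger backbone in practice)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, img):
        return self.net(img)

encoder = SharedEncoder()
keypoint_head = nn.Linear(256, 2 * 10)  # future 2-D robot keypoints (10 keypoints assumed)
policy_head = nn.Linear(256, 14)        # placeholder for the diffusion policy head (action dim assumed)

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(keypoint_head.parameters()) + list(policy_head.parameters()),
    lr=1e-4,
)

def cotraining_step(human_batch, robot_batch, aux_weight=1.0):
    """One step: auxiliary keypoint loss on an edited human-video batch,
    behavior-cloning loss on a (small) robot-demonstration batch."""
    img_h, future_kpts = human_batch     # edited frame, future 2-D keypoint targets
    img_r, expert_action = robot_batch   # robot demo frame, expert action

    aux_loss = nn.functional.mse_loss(keypoint_head(encoder(img_h)), future_kpts)
    bc_loss = nn.functional.mse_loss(policy_head(encoder(img_r)), expert_action)

    loss = bc_loss + aux_weight * aux_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy usage with random tensors standing in for real batches.
human_batch = (torch.randn(8, 3, 96, 96), torch.randn(8, 20))
robot_batch = (torch.randn(8, 3, 96, 96), torch.randn(8, 14))
print(cotraining_step(human_batch, robot_batch))
```

The key design point the sketch tries to capture is that both losses flow through the same encoder, so the abundant edited human video keeps shaping the visual representation even while the policy head is fit to only 50 robot demonstrations per task.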

Marion Lepert, Jiaying Fang, Jeannette Bohg

Computing Technology, Computer Technology

Marion Lepert, Jiaying Fang, Jeannette Bohg. Masquerade: Learning from In-the-wild Human Videos using Data-Editing [EB/OL]. (2025-08-13) [2025-08-24]. https://arxiv.org/abs/2508.09976.
