
Learning Generalizable Robot Policy with Human Demonstration Video as a Prompt

Source: arXiv
Abstract

Recent robot learning methods commonly rely on imitation learning from massive robotic datasets collected via teleoperation. When facing a new task, such methods generally require collecting a new set of teleoperation data and finetuning the policy. Moreover, the teleoperation data collection pipeline itself is tedious and expensive. In contrast, humans can efficiently learn new tasks simply by watching others perform them. In this paper, we introduce a novel two-stage framework that utilizes human demonstrations to learn a generalizable robot policy. Such a policy can directly take a human demonstration video as a prompt and perform new tasks without any new teleoperation data or model finetuning. In the first stage, we train a video generation model that captures a joint representation of both human and robot demonstration video data using cross-prediction. In the second stage, we fuse the learned representation with a shared action space between human and robot using a novel prototypical contrastive loss. Empirical evaluations on real-world dexterous manipulation tasks demonstrate the effectiveness and generalization capabilities of our proposed method.
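The second stage hinges on a prototypical contrastive objective that aligns human and robot embeddings around shared action prototypes. The abstract gives no implementation details, so the following is a minimal sketch of one common InfoNCE-style form of such a loss, assuming PyTorch; the tensor names (embeddings, prototypes, proto_ids) are hypothetical, and the authors' actual formulation may differ.

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(embeddings, prototypes, proto_ids, temperature=0.1):
    """InfoNCE-style loss pulling each embedding toward its assigned
    prototype and pushing it away from all other prototypes.

    embeddings : (N, D) feature vectors, e.g. fused human/robot video
                 representations from the first-stage model.
    prototypes : (K, D) learnable prototype vectors, one per shared
                 human-robot action cluster.
    proto_ids  : (N,) long tensor, index of each sample's prototype.
    """
    z = F.normalize(embeddings, dim=-1)       # unit-norm embeddings
    c = F.normalize(prototypes, dim=-1)       # unit-norm prototypes
    logits = z @ c.t() / temperature          # (N, K) scaled cosine similarities
    return F.cross_entropy(logits, proto_ids) # softmax over prototypes
```

In this sketch, a human clip and a robot clip of the same task would share a proto_id, so minimizing the loss pulls both embeddings toward the same prototype while separating them from the prototypes of other tasks, which is one way a shared human-robot action space could be enforced.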

Xiang Zhu, Yichen Liu, Hezhong Li, Jianyu Chen

Subjects: Computing and Computer Technology; Automation Technology and Equipment

Xiang Zhu, Yichen Liu, Hezhong Li, Jianyu Chen. Learning Generalizable Robot Policy with Human Demonstration Video as a Prompt [EB/OL]. (2025-05-27) [2025-07-16]. https://arxiv.org/abs/2505.20795.
