
ViSA-Flow: Accelerating Robot Skill Learning via Large-Scale Video Semantic Action Flow

Source: arXiv
Abstract

One of the central challenges preventing robots from acquiring complex manipulation skills is the prohibitive cost of collecting large-scale robot demonstrations. In contrast, humans are able to learn efficiently by watching others interact with their environment. To bridge this gap, we introduce semantic action flow as a core intermediate representation capturing the essential spatio-temporal manipulator-object interactions, invariant to superficial visual differences. We present ViSA-Flow, a framework that learns this representation in a self-supervised manner from large-scale unlabeled video data. First, a generative model is pre-trained on semantic action flows automatically extracted from large-scale human-object interaction video data, learning a robust prior over manipulation structure. Second, this prior is efficiently adapted to a target robot by fine-tuning on a small set of robot demonstrations processed through the same semantic abstraction pipeline. We demonstrate through extensive experiments on the CALVIN benchmark and real-world tasks that ViSA-Flow achieves state-of-the-art performance, particularly in low-data regimes, outperforming prior methods by effectively transferring knowledge from human video observation to robotic execution. Videos are available at https://visaflow-web.github.io/ViSAFLOW.
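The abstract describes a two-stage recipe: pre-train a generative prior on semantic action flows extracted from unlabeled human videos, then fine-tune on a small set of robot demonstrations passed through the same abstraction. The following is a minimal PyTorch sketch of that recipe under stated assumptions: every name here (SemanticFlowExtractor, FlowPrior, the GRU backbone, the MSE objectives) is illustrative, not the paper's actual architecture or code.

import torch
import torch.nn as nn

class SemanticFlowExtractor(nn.Module):
    # Stand-in for the automatic extraction pipeline: maps raw video
    # frames to a compact spatio-temporal manipulator-object flow.
    def __init__(self, flow_dim=64):
        super().__init__()
        self.proj = nn.LazyLinear(flow_dim)  # placeholder for tracking/segmentation

    def forward(self, frames):               # frames: (B, T, C*H*W) flattened clips
        return self.proj(frames)             # -> (B, T, flow_dim) semantic flow

class FlowPrior(nn.Module):
    # Generative prior over flow sequences; realized here as simple
    # next-step prediction with a GRU (an assumption, not the paper's model).
    def __init__(self, flow_dim=64):
        super().__init__()
        self.rnn = nn.GRU(flow_dim, 128, batch_first=True)
        self.head = nn.Linear(128, flow_dim)

    def forward(self, flow):
        h, _ = self.rnn(flow)
        return self.head(h)

def pretrain(prior, extractor, human_clips, lr=1e-4):
    # Stage 1: self-supervised pre-training on flows extracted from
    # unlabeled human-object interaction videos.
    opt = torch.optim.AdamW(prior.parameters(), lr=lr)
    for frames in human_clips:                # iterable of (B, T, D) tensors
        flow = extractor(frames).detach()
        pred = prior(flow[:, :-1])            # predict the flow at t+1 from the prefix
        loss = nn.functional.mse_loss(pred, flow[:, 1:])
        opt.zero_grad(); loss.backward(); opt.step()

def finetune(prior, extractor, policy, robot_demos, lr=1e-5):
    # Stage 2: adapt the prior with a small robot-demonstration set
    # processed through the same semantic abstraction, via behavior cloning.
    params = list(prior.parameters()) + list(policy.parameters())
    opt = torch.optim.AdamW(params, lr=lr)
    for frames, actions in robot_demos:       # (B, T, D) observations, (B, T, A) actions
        feat, _ = prior.rnn(extractor(frames))
        loss = nn.functional.mse_loss(policy(feat), actions)
        opt.zero_grad(); loss.backward(); opt.step()

The lower learning rate in stage two reflects the abstract's claim that the prior is "efficiently adapted" rather than retrained, which is what makes the low-data regime workable.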

Changhe Chen, Quantao Yang, Xiaohao Xu, Nima Fazeli, Olov Andersson

Subjects: Computing Technology, Computer Technology; Automation Technology, Automation Equipment

Changhe Chen, Quantao Yang, Xiaohao Xu, Nima Fazeli, Olov Andersson. ViSA-Flow: Accelerating Robot Skill Learning via Large-Scale Video Semantic Action Flow [EB/OL]. (2025-05-02) [2025-06-05]. https://arxiv.org/abs/2505.01288.
