
EgoDex: Learning Dexterous Manipulation from Large-Scale Egocentric Video

Source: arXiv
Abstract

Imitation learning for manipulation has a well-known data scarcity problem. Unlike natural language and 2D computer vision, there is no Internet-scale corpus of data for dexterous manipulation. One appealing option is egocentric human video, a passively scalable data source. However, existing large-scale datasets such as Ego4D do not have native hand pose annotations and do not focus on object manipulation. To this end, we use Apple Vision Pro to collect EgoDex: the largest and most diverse dataset of dexterous human manipulation to date. EgoDex has 829 hours of egocentric video with paired 3D hand and finger tracking data collected at the time of recording, where multiple calibrated cameras and on-device SLAM can be used to precisely track the pose of every joint of each hand. The dataset covers a wide range of diverse manipulation behaviors with everyday household objects in 194 different tabletop tasks ranging from tying shoelaces to folding laundry. Furthermore, we train and systematically evaluate imitation learning policies for hand trajectory prediction on the dataset, introducing metrics and benchmarks for measuring progress in this increasingly important area. By releasing this large-scale dataset, we hope to push the frontier of robotics, computer vision, and foundation models.
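The benchmark task described in the abstract, imitation learning for hand trajectory prediction from egocentric observations, can be illustrated with a minimal sketch. The snippet below is a hypothetical example only: the `HandTrajectoryPolicy` module, the tensor shapes, the 21-joints-per-hand layout, and the MSE behavior-cloning loss are assumptions for illustration, not the paper's actual architecture or the released EgoDex data format.

```python
# Hypothetical sketch of imitation learning for hand trajectory prediction.
# Shapes, module names, and the joint layout are assumptions for illustration;
# they are not the EgoDex release format or the paper's model.
import torch
import torch.nn as nn

NUM_JOINTS = 2 * 21      # assumed: 21 tracked joints per hand, both hands
HORIZON = 16             # assumed: predict 16 future timesteps
OBS_DIM = 512            # assumed: precomputed egocentric image feature size

class HandTrajectoryPolicy(nn.Module):
    """Maps an image feature + current hand pose to a future joint trajectory."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + NUM_JOINTS * 3, 1024),
            nn.ReLU(),
            nn.Linear(1024, 1024),
            nn.ReLU(),
            nn.Linear(1024, HORIZON * NUM_JOINTS * 3),
        )

    def forward(self, img_feat, hand_pose):
        x = torch.cat([img_feat, hand_pose.flatten(1)], dim=-1)
        out = self.net(x)
        # Predicted future 3D positions for every joint of each hand.
        return out.view(-1, HORIZON, NUM_JOINTS, 3)

policy = HandTrajectoryPolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Stand-in batch: random tensors in place of real EgoDex samples.
img_feat = torch.randn(8, OBS_DIM)
hand_pose = torch.randn(8, NUM_JOINTS, 3)
future_traj = torch.randn(8, HORIZON, NUM_JOINTS, 3)

pred = policy(img_feat, hand_pose)
loss = nn.functional.mse_loss(pred, future_traj)   # behavior-cloning regression loss
optim.zero_grad()
loss.backward()
optim.step()
print(f"per-joint MSE: {loss.item():.4f}")
```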

Ryan Hoque, Peide Huang, David J. Yoon, Mouli Sivapurapu, Jian Zhang

Subjects: Computing and Computer Technology; Automation Technology and Equipment

Ryan Hoque, Peide Huang, David J. Yoon, Mouli Sivapurapu, Jian Zhang. EgoDex: Learning Dexterous Manipulation from Large-Scale Egocentric Video [EB/OL]. (2025-05-16) [2025-07-02]. https://arxiv.org/abs/2505.11709.
