
RL-Driven Data Generation for Robust Vision-Based Dexterous Grasping

Source: Arxiv
Abstract

This work presents reinforcement learning (RL)-driven data augmentation to improve the generalization of vision-action (VA) models for dexterous grasping. While real-to-sim-to-real frameworks, where a few real demonstrations seed large-scale simulated data, have proven effective for VA models, applying them to dexterous settings remains challenging: obtaining stable multi-finger contacts is nontrivial across diverse object shapes. To address this, we leverage RL to generate contact-rich grasping data across varied geometries. In line with the real-to-sim-to-real paradigm, the grasp skill is formulated as a parameterized and tunable reference trajectory refined by a residual policy learned via RL. This modular design enables trajectory-level control that is both consistent with real demonstrations and adaptable to diverse object geometries. A vision-conditioned policy trained on simulation-augmented data demonstrates strong generalization to unseen objects, highlighting the potential of our approach to alleviate the data bottleneck in training VA models.
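
To make the residual formulation concrete, below is a minimal, hypothetical Python sketch of how a parameterized reference trajectory might be combined with an RL-learned residual correction at each control step. The class names, linear interpolation scheme, and joint dimensions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's implementation): a
# parameterized, tunable reference grasp trajectory whose output is refined
# by a residual policy learned via RL.

class ReferenceTrajectory:
    """Tunable reference trajectory from a pre-grasp pose to a closed grasp.

    In a real-to-sim-to-real setting, the endpoint parameters could be
    seeded from a few real demonstrations.
    """

    def __init__(self, pre_grasp: np.ndarray, grasp: np.ndarray, horizon: int):
        self.pre_grasp = pre_grasp
        self.grasp = grasp
        self.horizon = horizon

    def __call__(self, t: int) -> np.ndarray:
        # Linear interpolation between the two waypoints over the horizon.
        alpha = min(t / self.horizon, 1.0)
        return (1.0 - alpha) * self.pre_grasp + alpha * self.grasp


def act(reference, residual_policy, obs: np.ndarray, t: int) -> np.ndarray:
    """Final joint command = reference waypoint + learned residual correction."""
    base = reference(t)
    residual = residual_policy(obs, t)  # small, bounded correction from RL
    return base + residual


if __name__ == "__main__":
    n_joints = 16  # e.g. a 16-DoF dexterous hand; illustrative only
    ref = ReferenceTrajectory(np.zeros(n_joints), np.ones(n_joints), horizon=50)
    zero_residual = lambda obs, t: np.zeros(n_joints)  # untrained placeholder
    print(act(ref, zero_residual, obs=np.zeros(3), t=25))
```

Under this kind of decomposition, the reference keeps the executed motion close to the demonstrated trajectory while the residual absorbs geometry-specific contact adjustments, which matches the trajectory-level control property the abstract highlights.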

Kazuhiro Sasabuchi, Jun Takamatsu, Atsushi Kanehira, Naoki Wake, Katsushi Ikeuchi

Subjects: Computing Technology; Computer Technology

Kazuhiro Sasabuchi, Jun Takamatsu, Atsushi Kanehira, Naoki Wake, Katsushi Ikeuchi. RL-Driven Data Generation for Robust Vision-Based Dexterous Grasping [EB/OL]. (2025-04-25) [2025-06-06]. https://arxiv.org/abs/2504.18084.
