Grasp2Grasp: Vision-Based Dexterous Grasp Translation via Schrödinger Bridges

Source: arXiv
Abstract

We propose a new approach to vision-based dexterous grasp translation, which aims to transfer grasp intent across robotic hands with differing morphologies. Given a visual observation of a source hand grasping an object, our goal is to synthesize a functionally equivalent grasp for a target hand without requiring paired demonstrations or hand-specific simulations. We frame this problem as a stochastic transport between grasp distributions using the Schrödinger Bridge formalism. Our method learns to map between source and target latent grasp spaces via score and flow matching, conditioned on visual observations. To guide this translation, we introduce physics-informed cost functions that encode alignment in base pose, contact maps, wrench space, and manipulability. Experiments across diverse hand-object pairs demonstrate our approach generates stable, physically grounded grasps with strong generalization. This work enables semantic grasp transfer for heterogeneous manipulators and bridges vision-based grasping with probabilistic generative modeling.
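The abstract does not state the transport objective explicitly; as a point of reference, the standard dynamic Schrödinger Bridge problem, which the paper presumably instantiates in latent grasp space, can be written as below. The symbols $p_{\mathrm{src}}$, $p_{\mathrm{tgt}}$, and $\mathbb{Q}$ are notational choices made here for illustration, not taken from the paper.

$$
\mathbb{P}^{\star} \;=\; \underset{\mathbb{P}:\,\mathbb{P}_0 = p_{\mathrm{src}},\;\mathbb{P}_1 = p_{\mathrm{tgt}}}{\arg\min}\; \mathrm{KL}\!\left(\mathbb{P} \,\|\, \mathbb{Q}\right)
$$

Here the minimization is over path measures on $[0,1]$ whose endpoint marginals match the source- and target-hand latent grasp distributions, and $\mathbb{Q}$ is a reference diffusion (e.g., Brownian motion). Score and flow matching then amount to regressing the score and drift of $\mathbb{P}^{\star}$ from samples, conditioned on the visual observation.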

Tao Zhong, Jonah Buchanan, Christine Allen-Blanchette

Subjects: Computing technology and computer technology; fundamental theory of automation

Tao Zhong, Jonah Buchanan, Christine Allen-Blanchette. Grasp2Grasp: Vision-Based Dexterous Grasp Translation via Schrödinger Bridges [EB/OL]. (2025-06-03) [2025-06-21]. https://arxiv.org/abs/2506.02489.
