
GraspMolmo: Generalizable Task-Oriented Grasping via Large-Scale Synthetic Data Generation

Source: arXiv
Abstract

We present GraspMolmo, a generalizable open-vocabulary task-oriented grasping (TOG) model. GraspMolmo predicts semantically appropriate, stable grasps conditioned on a natural language instruction and a single RGB-D frame. For instance, given "pour me some tea", GraspMolmo selects a grasp on a teapot handle rather than its body. Unlike prior TOG methods, which are limited by small datasets, simplistic language, and uncluttered scenes, GraspMolmo learns from PRISM, a novel large-scale synthetic dataset of 379k samples featuring cluttered environments and diverse, realistic task descriptions. We fine-tune the Molmo visual-language model on this data, enabling GraspMolmo to generalize to novel open-vocabulary instructions and objects. In challenging real-world evaluations, GraspMolmo achieves state-of-the-art results, with a 70% prediction success rate on complex tasks, compared to the 35% achieved by the next best alternative. GraspMolmo also successfully demonstrates the ability to predict semantically correct bimanual grasps zero-shot. We release our synthetic dataset, code, model, and benchmarks to accelerate research in task-semantic robotic manipulation; these, along with videos, are available at https://abhaybd.github.io/GraspMolmo/.
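The core interface the abstract describes, taking a task instruction plus one observation and returning the most task-appropriate grasp, can be sketched as below. This is a hypothetical illustration only: the candidate structure, the `select_task_grasp` function, and the keyword-based scoring are stand-ins invented for this sketch, not the actual GraspMolmo model or API.

```python
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    part: str    # object part the grasp attaches to, e.g. "handle"
    pose: tuple  # placeholder 6-DoF pose (x, y, z, roll, pitch, yaw)

def select_task_grasp(instruction: str, candidates: list) -> GraspCandidate:
    """Pick the candidate whose part best matches the task semantics.

    Toy stand-in for the learned model: a real TOG system scores grasps
    from the image and instruction; here a keyword prior does the job.
    """
    preferences = {"pour": "handle", "hand me": "body"}  # assumed toy prior
    preferred = next((part for key, part in preferences.items()
                      if key in instruction.lower()), None)
    for c in candidates:
        if c.part == preferred:
            return c
    return candidates[0]  # fall back to the first stable grasp

# The teapot example from the abstract: "pour me some tea" should
# select the handle grasp rather than the body grasp.
candidates = [
    GraspCandidate("body", (0.10, 0.00, 0.20, 0, 0, 0)),
    GraspCandidate("handle", (0.15, 0.05, 0.20, 0, 0, 1.57)),
]
print(select_task_grasp("pour me some tea", candidates).part)  # handle
```

The point of the sketch is the conditioning structure (instruction + scene in, task-consistent grasp out), not the scoring rule, which in the paper is learned from the PRISM dataset rather than hand-coded.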

Abhay Deshpande, Yuquan Deng, Arijit Ray, Jordi Salvador, Winson Han, Jiafei Duan, Kuo-Hao Zeng, Yuke Zhu, Ranjay Krishna, Rose Hendrix

Subject areas: Computing and Computer Technology; Automation Technology and Equipment

Abhay Deshpande, Yuquan Deng, Arijit Ray, Jordi Salvador, Winson Han, Jiafei Duan, Kuo-Hao Zeng, Yuke Zhu, Ranjay Krishna, Rose Hendrix. GraspMolmo: Generalizable Task-Oriented Grasping via Large-Scale Synthetic Data Generation [EB/OL]. (2025-05-19) [2025-06-25]. https://arxiv.org/abs/2505.13441.
