
Ges3ViG: Incorporating Pointing Gestures into Language-Based 3D Visual Grounding for Embodied Reference Understanding


Source: arXiv
Abstract

3-Dimensional Embodied Reference Understanding (3D-ERU) combines a language description and an accompanying pointing gesture to identify the most relevant target object in a 3D scene. Although prior work has explored purely language-based 3D grounding, 3D-ERU, which also incorporates human pointing gestures, has received limited attention. To address this gap, we introduce a data augmentation framework, Imputer, and use it to curate a new benchmark dataset, ImputeRefer, for 3D-ERU by incorporating human pointing gestures into existing 3D scene datasets that contain only language instructions. We also propose Ges3ViG, a novel model for 3D-ERU that achieves ~30% higher accuracy than other 3D-ERU models and ~9% higher accuracy than purely language-based 3D grounding models. Our code and dataset are available at https://github.com/AtharvMane/Ges3ViG.
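
The abstract does not describe the internals of Imputer or Ges3ViG, so the sketch below is only a minimal, hypothetical illustration of what a 3D-ERU sample and a naive pointing-based ranking baseline might look like. The field names (scene_points, utterance, pointing_origin, pointing_dir, target_id) and the function rank_by_pointing are illustrative assumptions, not the authors' ImputeRefer schema or the Ges3ViG model.

```python
# Hypothetical sketch of a 3D-ERU sample and a simple pointing-ray baseline.
# All names here are illustrative assumptions, not the ImputeRefer format.
from dataclasses import dataclass
import numpy as np

@dataclass
class ERUSample:
    scene_points: np.ndarray     # (N, 6) xyz + rgb point cloud of the 3D scene
    utterance: str               # language description, e.g. "the chair next to the desk"
    pointing_origin: np.ndarray  # (3,) 3D position where the pointing arm starts
    pointing_dir: np.ndarray     # (3,) unit vector along the pointing gesture
    target_id: int               # index of the referred object among scene proposals

def rank_by_pointing(centers: np.ndarray, origin: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Score candidate object centers by angular proximity to the pointing ray.

    A purely geometric baseline: the smaller the angle between the pointing
    direction and the vector from the arm to a candidate center, the higher
    that candidate is ranked.
    """
    vecs = centers - origin                                   # (M, 3) vectors to each candidate
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True) # normalize to unit length
    cos_sim = vecs @ (direction / np.linalg.norm(direction))  # cosine of angle to the ray
    return np.argsort(-cos_sim)                               # candidate indices, best first
```

A full 3D-ERU model such as the one described above would fuse such geometric cues with language and point-cloud features rather than relying on the pointing ray alone.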

Atharv Mahesh Mane, Dulanga Weerakoon, Vigneshwaran Subbaraju, Sougata Sen, Sanjay E. Sarma, Archan Misra

Subject: Computing technology, computer technology

Atharv Mahesh Mane, Dulanga Weerakoon, Vigneshwaran Subbaraju, Sougata Sen, Sanjay E. Sarma, Archan Misra. Ges3ViG: Incorporating Pointing Gestures into Language-Based 3D Visual Grounding for Embodied Reference Understanding [EB/OL]. (2025-04-13) [2025-05-02]. https://arxiv.org/abs/2504.09623.
