
SocialGesture: Delving into Multi-person Gesture Understanding

Source: arXiv
Abstract

Previous research in human gesture recognition has largely overlooked multi-person interactions, which are crucial for understanding the social context of naturally occurring gestures. This limitation in existing datasets presents a significant challenge in aligning human gestures with other modalities like language and speech. To address this issue, we introduce SocialGesture, the first large-scale dataset specifically designed for multi-person gesture analysis. SocialGesture features a diverse range of natural scenarios and supports multiple gesture analysis tasks, including video-based recognition and temporal localization, providing a valuable resource for advancing the study of gesture during complex social interactions. Furthermore, we propose a novel visual question answering (VQA) task to benchmark vision-language models' (VLMs) performance on social gesture understanding. Our findings highlight several limitations of current gesture recognition models, offering insights into future directions for improvement in this field. SocialGesture is available at huggingface.co/datasets/IrohXu/SocialGesture.
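
A minimal usage sketch (not from the paper): assuming the dataset follows a standard Hugging Face Hub layout, it could be loaded with the Python datasets library. The repository ID comes from the URL in the abstract; the split names and per-example fields are not specified here, so the returned object should be inspected before relying on any particular schema.

# Minimal sketch: load SocialGesture from the Hugging Face Hub.
# The repo ID "IrohXu/SocialGesture" is taken from the abstract's URL;
# available splits and feature columns are assumptions to verify.
from datasets import load_dataset

ds = load_dataset("IrohXu/SocialGesture")  # downloads and caches all available splits
print(ds)  # shows split names, sizes, and feature columns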

Xu Cao, Pranav Virupaksha, Wenqi Jia, Bolin Lai, Fiona Ryan, Sangmin Lee, James M. Rehg

Subjects: Computing technology, computer technology

Xu Cao, Pranav Virupaksha, Wenqi Jia, Bolin Lai, Fiona Ryan, Sangmin Lee, James M. Rehg. SocialGesture: Delving into Multi-person Gesture Understanding [EB/OL]. (2025-04-02) [2025-04-30]. https://arxiv.org/abs/2504.02244.
