
Object-centric Video Question Answering with Visual Grounding and Referring


Source: arXiv
Abstract

Video Large Language Models (VideoLLMs) have recently demonstrated remarkable progress in general video understanding. However, existing models primarily focus on high-level comprehension and are limited to text-only responses, restricting the flexibility for object-centric, multi-round interactions. In this paper, we make three contributions: (i) we address these limitations by introducing a VideoLLM capable of performing both object referring for input and grounding for output in video reasoning tasks, i.e., allowing users to interact with videos using both textual and visual prompts; (ii) we propose STOM (Spatial-Temporal Overlay Module), a novel approach that propagates arbitrary visual prompts provided at any single timestamp to the remaining frames within a video; (iii) we present VideoInfer, a manually curated object-centric video instruction dataset featuring question-answering pairs that require reasoning. We conduct comprehensive experiments on VideoInfer and other existing benchmarks across video question answering and referring object segmentation. The results on 12 benchmarks spanning 6 tasks show that our proposed model consistently outperforms baselines in both video question answering and segmentation, underscoring its robustness in multimodal, object-centric video and image understanding. Project page: https://qirui-chen.github.io/RGA3-release/.
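To make the STOM idea concrete, below is a minimal illustrative sketch of a visual-prompt overlay: a mask supplied at one timestamp is painted onto every frame of a clip. This is not the authors' implementation; the propagation step here is a placeholder (the mask from the prompted frame is simply reused), whereas the paper's module presumably propagates the prompt across frames in a learned or tracking-based way. The function name `overlay_prompt` and all parameters are hypothetical.

```python
# Illustrative sketch only (NumPy): overlay a visual prompt given at frame t0
# onto all frames of a video. Propagation is a stand-in identity copy; a real
# system would warp or track the prompt from t0 to each frame t.
import numpy as np

def overlay_prompt(frames: np.ndarray,
                   prompt_mask: np.ndarray,
                   t0: int,
                   color=(255, 0, 0),
                   alpha: float = 0.5) -> np.ndarray:
    """frames: (T, H, W, 3) uint8 clip; prompt_mask: (H, W) bool mask given at frame t0."""
    T, H, W, _ = frames.shape
    out = frames.astype(np.float32).copy()
    overlay = np.empty((H, W, 3), dtype=np.float32)
    overlay[...] = color
    for t in range(T):
        # Placeholder propagation: reuse the t0 mask for every frame.
        mask_t = prompt_mask
        m = mask_t[..., None].astype(np.float32)
        # Alpha-blend the colored prompt region into frame t.
        out[t] = (1 - alpha * m) * out[t] + alpha * m * overlay
    return out.clip(0, 255).astype(np.uint8)

# Toy usage: a 4-frame gray clip with a square prompt drawn at frame 0.
video = np.full((4, 64, 64, 3), 128, dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
blended = overlay_prompt(video, mask, t0=0)
print(blended.shape)  # (4, 64, 64, 3)
```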

Haochen Wang, Qirui Chen, Cilin Yan, Jiayin Cai, Xiaolong Jiang, Yao Hu, Weidi Xie, Stratis Gavves

Subject: Computing technology; computer technology

Haochen Wang, Qirui Chen, Cilin Yan, Jiayin Cai, Xiaolong Jiang, Yao Hu, Weidi Xie, Stratis Gavves. Object-centric Video Question Answering with Visual Grounding and Referring [EB/OL]. (2025-07-25) [2025-08-10]. https://arxiv.org/abs/2507.19599.
