MUPA: Towards Multi-Path Agentic Reasoning for Grounded Video Question Answering
Grounded Video Question Answering (Grounded VideoQA) requires aligning textual answers with explicit visual evidence. However, modern multimodal models often rely on linguistic priors and spurious correlations, resulting in poorly grounded predictions. In this work, we propose MUPA, a cooperative MUlti-Path Agentic approach that unifies video grounding, question answering, answer reflection, and aggregation to tackle Grounded VideoQA. MUPA features three distinct reasoning paths that interleave grounding and QA agents in different chronological orders, along with a dedicated reflection agent that judges and aggregates the multi-path results to produce consistent QA and grounding. This design markedly improves grounding fidelity without sacrificing answer accuracy. Despite using only 2B parameters, our method outperforms all 7B-scale competitors. When scaled to 7B parameters, MUPA establishes new state-of-the-art results, with Acc@GQA of 30.3% and 47.4% on NExT-GQA and DeVE-QA respectively, demonstrating MUPA's effectiveness towards trustworthy video-language understanding. Our code is available at https://github.com/longmalongma/MUPA.
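The multi-path design described above can be illustrated with a minimal, hypothetical sketch: three paths that order grounding and answering differently, followed by a reflection step that votes on answers and merges the time windows of agreeing paths. All function names, stub outputs, and the voting/averaging scheme below are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical stand-ins for the grounding and QA agents (stubs).
def ground(video, text):
    """Return a (start, end) time window as visual evidence (stub)."""
    return (2.0, 5.0)

def answer(video, question, window=None):
    """Return a textual answer, optionally conditioned on a grounded window (stub)."""
    return "the dog jumps" if window else "the dog runs"

@dataclass
class PathResult:
    answer: str
    window: tuple

def path_ground_then_answer(video, q):
    # Path 1: localize evidence first, then answer within that window.
    w = ground(video, q)
    return PathResult(answer(video, q, window=w), w)

def path_answer_then_ground(video, q):
    # Path 2: answer first, then ground the answer text.
    a = answer(video, q)
    return PathResult(a, ground(video, a))

def path_joint(video, q):
    # Path 3: grounding and answering produced together in one pass (stub).
    w = ground(video, q)
    return PathResult(answer(video, q, window=w), w)

def reflect(results):
    """Reflection agent (illustrative): majority-vote the answer,
    then average the windows of the agreeing paths."""
    best, _ = Counter(r.answer for r in results).most_common(1)[0]
    wins = [r.window for r in results if r.answer == best]
    start = sum(w[0] for w in wins) / len(wins)
    end = sum(w[1] for w in wins) / len(wins)
    return PathResult(best, (start, end))

results = [p("video.mp4", "What does the dog do?") for p in
           (path_ground_then_answer, path_answer_then_ground, path_joint)]
final = reflect(results)
print(final.answer, final.window)
```

With the stubs above, two paths agree on the grounded answer, so reflection keeps it and the evidence window, showing how aggregation can suppress an ungrounded outlier prediction.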
Huilin Song, Junbin Xiao, Bimei Wang, Han Peng, Haoxuan Li, Xun Yang, Meng Wang, Tat-Seng Chua, Jisheng Dang
Computing Technology; Computer Technology
Huilin Song, Junbin Xiao, Bimei Wang, Han Peng, Haoxuan Li, Xun Yang, Meng Wang, Tat-Seng Chua, Jisheng Dang. MUPA: Towards Multi-Path Agentic Reasoning for Grounded Video Question Answering [EB/OL]. (2025-06-27) [2025-07-21]. https://arxiv.org/abs/2506.18071.