National Preprint Platform (国家预印本平台)

VideoDeepResearch: Long Video Understanding With Agentic Tool Using


Source: arXiv

English Abstract

Long video understanding (LVU) presents a significant challenge for current multimodal large language models (MLLMs) due to the task's inherent complexity and context window constraints. It is widely assumed that addressing LVU tasks requires foundation MLLMs with extended context windows, strong visual perception capabilities, and proficient domain expertise. In this work, we challenge this common belief by introducing VideoDeepResearch, a novel agentic framework for long video understanding. Our approach relies solely on a text-only large reasoning model (LRM) combined with a modular multimodal toolkit, including multimodal retrievers and visual perceivers, all of which are readily available in practice. For each LVU task, the system formulates a problem-solving strategy through reasoning, while selectively accessing and utilizing essential video content via tool use. We conduct extensive experiments on popular LVU benchmarks, including MLVU, Video-MME, and LVBench. Our results demonstrate that VideoDeepResearch achieves substantial improvements over existing MLLM baselines, surpassing the previous state of the art by 9.6%, 6.6%, and 3.9% on MLVU (test), LVBench, and LongVideoBench, respectively. These findings highlight the promise of agentic systems in overcoming key challenges in LVU problems.
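The abstract describes a pattern in which a text-only reasoner never ingests the full video: it retrieves candidate clips with a multimodal retriever, inspects only those clips via a visual perceiver, and then answers from the gathered textual evidence. The following is a minimal, illustrative sketch of that loop; all class and function names (`Retriever`, `Perceiver`, `answer`) are hypothetical stand-ins, not the authors' implementation, and the tools are replaced with trivial text stubs.

```python
# Illustrative sketch of an agentic tool-use loop for long video QA.
# All names are hypothetical; real systems would back Retriever with a
# multimodal retriever and Perceiver with a vision-language model.
from dataclasses import dataclass

@dataclass
class Clip:
    start: float      # clip start time in seconds
    end: float        # clip end time in seconds
    caption: str      # text surrogate a visual perceiver would produce

class Retriever:
    """Stand-in for a multimodal retriever: rank clips against the query."""
    def __init__(self, clips):
        self.clips = clips

    def search(self, query, k=2):
        # Toy relevance score: count query words appearing in the caption.
        scored = sorted(self.clips,
                        key=lambda c: -sum(w in c.caption.lower()
                                           for w in query.lower().split()))
        return scored[:k]

class Perceiver:
    """Stand-in for a visual perceiver: 'look at' one clip, report text."""
    def describe(self, clip):
        return f"[{clip.start:.0f}-{clip.end:.0f}s] {clip.caption}"

def answer(question, retriever, perceiver):
    """Retrieve candidate clips, perceive only those, answer from evidence.

    A real system would hand `evidence` to a text-only LRM for reasoning;
    here we simply return the top-ranked observation.
    """
    evidence = [perceiver.describe(c) for c in retriever.search(question)]
    return evidence[0]

clips = [
    Clip(0, 60, "opening credits over city skyline"),
    Clip(600, 660, "a chef plates the dessert and smiles"),
    Clip(1200, 1260, "closing credits"),
]
print(answer("what does the chef do", Retriever(clips), Perceiver()))
# → [600-660s] a chef plates the dessert and smiles
```

The key design point mirrored here is that the reasoner's context holds only retrieved, perceived snippets, so the cost is bounded by the number of tool calls rather than by video length.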

Zheng Liu, Junjie Zhou, Hongjin Qian, Ji-Rong Wen, Zhicheng Dou, Huaying Yuan

Computing Technology; Computer Technology

Zheng Liu, Junjie Zhou, Hongjin Qian, Ji-Rong Wen, Zhicheng Dou, Huaying Yuan. VideoDeepResearch: Long Video Understanding With Agentic Tool Using [EB/OL]. (2025-06-12) [2025-07-16]. https://arxiv.org/abs/2506.10821.
