CogStream: Context-guided Streaming Video Question Answering
Despite advances in Video Large Language Models (Vid-LLMs) that have improved multimodal understanding, streaming video reasoning remains challenging because of its reliance on contextual information. Existing paradigms feed all available historical context into Vid-LLMs, incurring a significant computational burden for visual data processing. Moreover, the inclusion of irrelevant context distracts models from key details. This paper introduces a challenging task called Context-guided Streaming Video Reasoning (CogStream), which simulates real-world streaming video scenarios and requires models to identify the most relevant historical context to answer questions about the current stream. To support CogStream, we present a densely annotated dataset featuring extensive, hierarchical question-answer pairs generated by a semi-automatic pipeline. We also present CogReasoner as a baseline model; it tackles this task efficiently by leveraging visual stream compression and historical dialogue retrieval. Extensive experiments demonstrate the effectiveness of this method. Code will be released soon.
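The abstract does not detail how historical dialogue retrieval works in CogReasoner, but the general idea of selecting the most relevant past question-answer pairs for the current question can be sketched as follows. This is a minimal illustration, not the paper's method: the bag-of-words "embedding" and the `retrieve_relevant_history` helper are assumptions standing in for whatever learned encoder and scoring the authors use.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real system would use a learned text encoder.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_relevant_history(history, question, k=2):
    """Return the k past (question, answer) pairs most similar to the current question."""
    q_vec = embed(question)
    ranked = sorted(history, key=lambda qa: cosine(embed(qa[0]), q_vec), reverse=True)
    return ranked[:k]

# Hypothetical streaming-QA dialogue history.
history = [
    ("What color is the car?", "It is red."),
    ("Who enters the room?", "A man in a suit."),
    ("Where is the car parked?", "In front of the house."),
]
top = retrieve_relevant_history(history, "What happens to the red car next?", k=2)
```

Here only the two car-related exchanges are passed forward as context, while the unrelated one is dropped, which is the kind of filtering the task is designed to reward.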
Huabin Liu, Zicheng Zhao, Kangyu Wang, Shijie Li, Rui Qian, Weiyao Lin
Computing Technology; Computer Technology
Huabin Liu, Zicheng Zhao, Kangyu Wang, Shijie Li, Rui Qian, Weiyao Lin. CogStream: Context-guided Streaming Video Question Answering [EB/OL]. (2025-06-12) [2025-06-21]. https://arxiv.org/abs/2506.10516.