Daily-Omni: Towards Audio-Visual Reasoning with Temporal Alignment across Modalities
Recent Multimodal Large Language Models (MLLMs) achieve promising performance on visual and audio benchmarks independently. However, the ability of these models to process cross-modal information synchronously remains largely unexplored. In this paper, we introduce: 1) Daily-Omni, an Audio-Visual Question Answering benchmark comprising 684 videos of daily-life scenarios from diverse sources, rich in both audio and visual information, and featuring 1197 multiple-choice QA pairs across 6 major tasks; 2) the Daily-Omni QA Generation Pipeline, which includes automatic annotation, QA generation and QA optimization, and significantly improves the efficiency of human evaluation and the scalability of the benchmark; 3) Daily-Omni-Agent, a training-free agent that combines an open-source Visual Language Model (VLM), an Audio Language Model (ALM) and an Automatic Speech Recognition (ASR) model to establish a baseline for this benchmark. The results show that current MLLMs still struggle significantly with tasks requiring audio-visual integration, but that combining VLMs and ALMs with simple temporal alignment techniques yields substantially better performance. Code and benchmark are available at \href{https://github.com/Lliar-liar/Daily-Omni}{https://github.com/Lliar-liar/Daily-Omni}.
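To make the phrase "simple temporal alignment techniques" concrete, the sketch below shows one plausible way a training-free agent could align per-segment outputs from a VLM, an ALM and an ASR model: interleave all timestamped descriptions into a single chronological timeline and hand it to a text-only reasoning LLM. This is an illustrative assumption, not the pipeline from the paper or its repository; the segment boundaries, model outputs and the final LLM call are hypothetical.

```python
# Hypothetical sketch of timestamp-based temporal alignment across modalities.
# The Event fields, segment times, and example model outputs are illustrative
# assumptions, not the authors' actual implementation.
from dataclasses import dataclass


@dataclass
class Event:
    start: float   # segment start time in seconds
    end: float     # segment end time in seconds
    modality: str  # "visual", "audio", or "speech"
    text: str      # VLM caption, ALM sound description, or ASR transcript


def build_timeline(visual, audio, speech):
    """Interleave events from all modalities by start time so the downstream
    LLM sees what was seen, heard, and said in each window together."""
    events = sorted(visual + audio + speech, key=lambda e: e.start)
    lines = [f"[{e.start:5.1f}-{e.end:5.1f}s] ({e.modality}) {e.text}"
             for e in events]
    return "\n".join(lines)


# Example: two 10-second segments of a daily-life cooking clip.
visual = [Event(0, 10, "visual", "A person chops onions on a wooden board."),
          Event(10, 20, "visual", "The onions are tipped into a hot pan.")]
audio = [Event(0, 10, "audio", "Rhythmic knife tapping."),
         Event(10, 20, "audio", "Loud sizzling begins.")]
speech = [Event(11, 15, "speech", '"Careful, the oil is really hot."')]

timeline = build_timeline(visual, audio, speech)
prompt = (f"Timeline of the clip:\n{timeline}\n\n"
          "Question: What sound is heard right after the onions enter the pan?\n"
          "Options: A) knife tapping  B) sizzling  C) silence  D) music")
print(prompt)  # this prompt would be passed to a text-only reasoning LLM
```

Even this coarse segment-level merge gives a text-only model a shared clock across modalities, which is the kind of lightweight alignment the abstract credits with substantially better performance than end-to-end MLLMs on audio-visual integration tasks.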
Ziwei Zhou, Rui Wang, Zuxuan Wu
Computing Technology; Computer Technology
Ziwei Zhou, Rui Wang, Zuxuan Wu. Daily-Omni: Towards Audio-Visual Reasoning with Temporal Alignment across Modalities [EB/OL]. (2025-05-23) [2025-07-01]. https://arxiv.org/abs/2505.17862.