Watch and Listen: Understanding Audio-Visual-Speech Moments with Multimodal LLM
Humans naturally understand moments in a video by integrating visual and auditory cues. For example, localizing a scene such as "A scientist passionately speaks on wildlife conservation as dramatic orchestral music plays, with the audience nodding and applauding" requires simultaneous processing of visual, audio, and speech signals. However, existing models often struggle to effectively fuse and interpret audio information, limiting their capacity for comprehensive video temporal understanding. To address this, we present TriSense, a triple-modality large language model designed for holistic video temporal understanding through the integration of visual, audio, and speech modalities. Central to TriSense is a Query-Based Connector that adaptively reweights modality contributions based on the input query, enabling robust performance under modality dropout and allowing flexible combinations of available inputs. To support TriSense's multimodal capabilities, we introduce TriSense-2M, a high-quality dataset of over 2 million curated samples generated via an automated pipeline powered by fine-tuned LLMs. TriSense-2M includes long-form videos and diverse modality combinations, facilitating broad generalization. Extensive experiments across multiple benchmarks demonstrate the effectiveness of TriSense and its potential to advance multimodal video analysis. Code and dataset will be publicly released.
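The abstract does not detail the internals of the Query-Based Connector; the PyTorch sketch below is only a hypothetical illustration of query-conditioned modality reweighting with masking for dropped modalities. All module names, dimensions, and the fusion scheme are assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a query-conditioned modality-reweighting connector.
# Not the TriSense implementation; names and dimensions are assumed.
import torch
import torch.nn as nn


class QueryBasedConnector(nn.Module):
    """Fuses visual, audio, and speech token features with weights derived
    from the text query. Absent modalities are masked so the softmax
    renormalizes over the modalities that are actually available
    (illustrating robustness under modality dropout)."""

    def __init__(self, dim: int = 1024, num_modalities: int = 3):
        super().__init__()
        self.score = nn.Linear(dim, num_modalities)  # query -> per-modality logits
        self.proj = nn.Linear(dim, dim)              # fused features -> LLM input space

    def forward(self, query_emb, modality_feats, present_mask):
        # query_emb:      (B, dim)        pooled query embedding
        # modality_feats: (B, M, T, dim)  per-modality token features (zero-padded if absent)
        # present_mask:   (B, M) bool     True where a modality is available
        logits = self.score(query_emb)                                  # (B, M)
        logits = logits.masked_fill(~present_mask, float("-inf"))       # drop missing modalities
        weights = torch.softmax(logits, dim=-1)                         # reweight present ones
        fused = (weights[:, :, None, None] * modality_feats).sum(dim=1) # (B, T, dim)
        return self.proj(fused)


if __name__ == "__main__":
    B, M, T, D = 2, 3, 16, 1024
    connector = QueryBasedConnector(dim=D, num_modalities=M)
    q = torch.randn(B, D)
    feats = torch.randn(B, M, T, D)
    mask = torch.tensor([[True, True, True],
                         [True, False, True]])  # audio missing in the second sample
    print(connector(q, feats, mask).shape)      # torch.Size([2, 16, 1024])
```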
Zinuo Li, Xian Zhang, Yongxin Guo, Mohammed Bennamoun, Farid Boussaid, Girish Dwivedi, Luqi Gong, Qiuhong Ke
Computing Technology; Computer Technology
Zinuo Li, Xian Zhang, Yongxin Guo, Mohammed Bennamoun, Farid Boussaid, Girish Dwivedi, Luqi Gong, Qiuhong Ke. Watch and Listen: Understanding Audio-Visual-Speech Moments with Multimodal LLM [EB/OL]. (2025-05-23) [2025-06-07]. https://arxiv.org/abs/2505.18110.