Hearing from Silence: Reasoning Audio Descriptions from Silent Videos via Vision-Language Model
Humans can intuitively infer sounds from silent videos, but whether multimodal large language models can perform modal-mismatch reasoning without access to the target modality remains relatively unexplored. Current text-assisted video-to-audio (VT2A) methods excel at video foley tasks but struggle to acquire audio descriptions during inference. We introduce the task of Reasoning Audio Descriptions from Silent Videos (SVAD) to address this challenge and investigate the capabilities of vision-language models (VLMs) on this task. To further enhance VLMs' reasoning capacity for SVAD, we construct the CoT-AudioCaps dataset and propose a Chain-of-Thought-based supervised fine-tuning strategy. Experiments on SVAD and downstream VT2A tasks demonstrate our method's effectiveness in two key aspects: significantly improving VLMs' modal-mismatch reasoning for SVAD and effectively addressing the challenge of acquiring audio descriptions during VT2A inference.
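A minimal sketch of how a Chain-of-Thought supervised fine-tuning sample for the SVAD task might be assembled. The message schema, prompt wording, and field names below are hypothetical illustrations and not the paper's actual data format; the idea shown is only that the target pairs an explicit reasoning trace with the final audio description, so the VLM learns to reason before captioning.

```python
def build_cot_sft_sample(video_frames, reasoning_steps, audio_description):
    """Package one CoT-augmented training example for VLM fine-tuning (hypothetical schema)."""
    prompt = (
        "Watch the silent video and reason step by step about the sounds "
        "that would plausibly occur, then give a one-sentence audio description."
    )
    # Supervise on both the reasoning trace and the final answer, so the model
    # learns to emit the chain of thought before the audio description.
    target = (
        "Reasoning: " + " ".join(reasoning_steps) + "\n"
        "Audio description: " + audio_description
    )
    return {
        "messages": [
            {"role": "user", "content": [
                {"type": "video", "frames": video_frames},
                {"type": "text", "text": prompt},
            ]},
            {"role": "assistant", "content": [{"type": "text", "text": target}]},
        ]
    }


# Example usage with placeholder data.
sample = build_cot_sft_sample(
    video_frames=["frame_000.jpg", "frame_001.jpg"],
    reasoning_steps=[
        "A dog repeatedly opens its mouth toward the camera.",
        "Its posture and the quiet backyard suggest barking rather than panting.",
    ],
    audio_description="A dog barks loudly in a quiet backyard.",
)
print(sample["messages"][1]["content"][0]["text"])
```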
Yong Ren, Chenxing Li, Le Xu, Hao Gu, Duzhen Zhang, Yujie Chen, Manjie Xu, Ruibo Fu, Shan Yang, Dong Yu
Subject: Computing technology; computer technology
Yong Ren, Chenxing Li, Le Xu, Hao Gu, Duzhen Zhang, Yujie Chen, Manjie Xu, Ruibo Fu, Shan Yang, Dong Yu. Hearing from Silence: Reasoning Audio Descriptions from Silent Videos via Vision-Language Model [EB/OL]. (2025-05-19) [2025-06-05]. https://arxiv.org/abs/2505.13062.