Can Video LLMs Refuse to Answer? Alignment for Answerability in Video Large Language Models
In the broader context of deep learning, Multimodal Large Language Models have achieved significant breakthroughs by leveraging powerful Large Language Models as a backbone to align different modalities into the language space. A prime example is the development of Video Large Language Models (Video-LLMs). While numerous advancements have been proposed to enhance the video understanding capabilities of these models, they are predominantly trained on questions generated directly from video content. However, in real-world scenarios, users often pose questions that extend beyond the informational scope of the video, highlighting the need for Video-LLMs to assess the relevance of a question. We demonstrate that even the best-performing Video-LLMs fail to reject unfit questions, not necessarily due to a lack of video understanding, but because they have not been trained to identify and refuse such questions. To address this limitation, we propose alignment for answerability, a framework that equips Video-LLMs with the ability to evaluate the relevance of a question with respect to the input video and to appropriately decline to answer when the question exceeds the video's scope, along with an evaluation framework comprising a comprehensive set of metrics designed to measure model behavior before and after alignment. Furthermore, we present a pipeline for creating a dataset specifically tailored for alignment for answerability, leveraging existing video-description paired datasets.
Eunseop Yoon, Hee Suk Yoon, Mark A. Hasegawa-Johnson, Chang D. Yoo
Computing Technology; Computer Technology
Eunseop Yoon, Hee Suk Yoon, Mark A. Hasegawa-Johnson, Chang D. Yoo. Can Video LLMs Refuse to Answer? Alignment for Answerability in Video Large Language Models [EB/OL]. (2025-07-07) [2025-07-16]. https://arxiv.org/abs/2507.04976.