
Integrating Video and Text: A Balanced Approach to Multimodal Summary Generation and Evaluation

Source: arXiv
Abstract

Vision-Language Models (VLMs) often struggle to balance visual and textual information when summarizing complex multimodal inputs, such as entire TV show episodes. In this paper, we propose a zero-shot video-to-text summarization approach that builds its own screenplay representation of an episode, effectively integrating key video moments, dialogue, and character information into a unified document. Unlike previous approaches, we simultaneously generate screenplays and name the characters in a zero-shot fashion, using only the audio, video, and transcripts as input. Additionally, we highlight that existing summarization metrics can fail to assess the multimodal content in summaries. To address this, we introduce MFactSum, a multimodal metric that evaluates summaries with respect to both the vision and text modalities. Using MFactSum, we evaluate our screenplay summaries on the SummScreen3D dataset, demonstrating superiority over state-of-the-art VLMs such as Gemini 1.5 by generating summaries containing 20% more relevant visual information while requiring 75% less of the video as input.
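The abstract describes the screenplay representation only at a high level. As a hedged illustration, the Python sketch below shows one plausible way such a document could be assembled: interleaving visual scene captions with speaker-attributed dialogue by timestamp. Every name here (the Event structure, build_screenplay, the sample data) is a hypothetical assumption for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from heapq import merge
from typing import Optional


@dataclass
class Event:
    start: float             # seconds from episode start
    kind: str                # "scene" (visual caption) or "line" (dialogue)
    speaker: Optional[str]   # named character, when identification succeeds
    text: str


def build_screenplay(scenes: list[Event], dialogue: list[Event]) -> str:
    """Interleave visual captions and dialogue by timestamp into one document."""
    out = []
    # heapq.merge keeps the combined stream ordered by start time,
    # assuming each input list is already sorted.
    for ev in merge(scenes, dialogue, key=lambda e: e.start):
        if ev.kind == "scene":
            out.append(f"[SCENE] {ev.text}")
        else:
            out.append(f"{ev.speaker or 'UNKNOWN'}: {ev.text}")
    return "\n".join(out)


# Toy usage with made-up data:
scenes = [Event(0.0, "scene", None, "A dim hospital corridor; a nurse wheels a cart.")]
speech = [Event(2.5, "line", "LUCAS", "We need to talk about the test results.")]
print(build_screenplay(scenes, speech))
```

In such a scheme, the merged document can then be passed to a text summarizer, which is consistent with the abstract's claim that only a fraction of the raw video needs to be consumed as input.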

Galann Pennec, Zhengyuan Liu, Nicholas Asher, Philippe Muller, Nancy F. Chen

Subject categories: Computing Technology, Computer Technology

Galann Pennec, Zhengyuan Liu, Nicholas Asher, Philippe Muller, Nancy F. Chen. Integrating Video and Text: A Balanced Approach to Multimodal Summary Generation and Evaluation [EB/OL]. (2025-05-10) [2025-06-06]. https://arxiv.org/abs/2505.06594.
