Failures to Surface Harmful Contents in Video Large Language Models

Source: arXiv
Abstract

Video Large Language Models (VideoLLMs) are increasingly deployed in numerous critical applications, where users rely on auto-generated summaries while casually skimming the video stream. We show that this interaction hides a critical safety gap: if harmful content is embedded in a video, either as full-frame inserts or as small corner patches, state-of-the-art VideoLLMs rarely mention the harmful content in the output, despite its clear visibility to human viewers. A root-cause analysis reveals three compounding design flaws: (1) insufficient temporal coverage resulting from the sparse, uniformly spaced frame sampling used by most leading VideoLLMs, (2) spatial information loss introduced by aggressive token downsampling within sampled frames, and (3) encoder-decoder disconnection, whereby visual cues are only weakly utilized during text generation. Leveraging these insights, we craft three zero-query black-box attacks, each aligned with one of these flaws in the processing pipeline. Our large-scale evaluation across five leading VideoLLMs shows that the harmfulness omission rate exceeds 90% in most cases. Even when harmful content is clearly present in all frames, these models consistently fail to identify it. These results underscore a fundamental vulnerability in current VideoLLMs' designs and highlight the urgent need for sampling strategies, token compression, and decoding mechanisms that guarantee semantic coverage rather than speed alone.
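
As a minimal illustration of the first flaw, the sparse, uniformly spaced frame sampling, the Python sketch below (not taken from the paper) shows how a short harmful insert can fall entirely between sampled frames; the frame budget, video length, and segment placement are illustrative assumptions.

import numpy as np

def uniform_sample_indices(num_frames: int, num_samples: int) -> np.ndarray:
    """Pick num_samples frame indices evenly spaced across the whole video."""
    return np.linspace(0, num_frames - 1, num_samples).round().astype(int)

# A 2-minute video at 30 fps (3600 frames) reduced to an 8-frame budget,
# a typical preprocessing step before the vision encoder (illustrative numbers).
sampled = uniform_sample_indices(num_frames=3600, num_samples=8)

# A hypothetical harmful insert spanning 2 seconds (60 frames) from frame 1100.
harmful = set(range(1100, 1160))
seen = any(int(i) in harmful for i in sampled)

print("sampled indices:", sampled.tolist())  # [0, 514, 1028, 1542, 2057, 2571, 3085, 3599]
print("harmful segment sampled:", seen)      # False: the ~514-frame gap between
# consecutive samples far exceeds the 60-frame insert, so the encoder never sees it.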

Yuxin Cao, Wei Song, Derui Wang, Jingling Xue, Jin Song Dong

Computing Technology, Computer Technology

Yuxin Cao, Wei Song, Derui Wang, Jingling Xue, Jin Song Dong. Failures to Surface Harmful Contents in Video Large Language Models [EB/OL]. (2025-08-14) [2025-08-28]. https://arxiv.org/abs/2508.10974.
