Seeing the Threat: Vulnerabilities in Vision-Language Models to Adversarial Attack
Large Vision-Language Models (LVLMs) have shown remarkable capabilities across a wide range of multimodal tasks. However, their integration of visual inputs introduces expanded attack surfaces, thereby exposing them to novel security vulnerabilities. In this work, we conduct a systematic representational analysis to uncover why conventional adversarial attacks can circumvent the safety mechanisms embedded in LVLMs. We further propose a novel two-stage evaluation framework for adversarial attacks on LVLMs. The first stage differentiates among instruction non-compliance, outright refusal, and successful adversarial exploitation. The second stage quantifies the degree to which the model's output fulfills the harmful intent of the adversarial prompt, while categorizing refusal behavior into direct refusals, soft refusals, and partial refusals that remain inadvertently helpful. Finally, we introduce a normative schema that defines idealized model behavior when confronted with harmful prompts, offering a principled target for safety alignment in multimodal systems.
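The abstract's two-stage evaluation can be read as a small response taxonomy. The Python sketch below is purely illustrative and not from the paper; names such as StageOneOutcome, RefusalType, and AttackEvaluation are hypothetical placeholders for the labels and score the two stages would attach to a single model response.

    from enum import Enum
    from dataclasses import dataclass
    from typing import Optional

    class StageOneOutcome(Enum):
        # Stage 1: coarse outcome of the adversarial attempt
        NON_COMPLIANCE = "instruction non-compliance"
        REFUSAL = "refusal"
        ADVERSARIAL_SUCCESS = "successful adversarial exploitation"

    class RefusalType(Enum):
        # Stage 2: finer-grained refusal categories named in the abstract
        DIRECT = "direct refusal"
        SOFT = "soft refusal"
        PARTIAL_HELPFUL = "partial refusal that remains inadvertently helpful"

    @dataclass
    class AttackEvaluation:
        stage_one: StageOneOutcome
        # Degree to which the output fulfills the harmful intent
        # (assumed here to be a score in [0.0, 1.0]); set when the
        # response was not a flat refusal.
        harmful_fulfillment: Optional[float] = None
        # Populated when the model refused, to record how it refused.
        refusal_type: Optional[RefusalType] = None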
Juan Ren, Mark Dras, Usman Naseem
Subjects: Computing Technology; Computer Technology
Juan Ren, Mark Dras, Usman Naseem. Seeing the Threat: Vulnerabilities in Vision-Language Models to Adversarial Attack [EB/OL]. (2025-05-28) [2025-06-07]. https://arxiv.org/abs/2505.21967.