
Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models


Source: arXiv
Abstract

Vision-language models (VLMs) have improved significantly in their capabilities, but their complex architecture makes their safety alignment challenging. In this paper, we reveal an uneven distribution of harmful information across the intermediate layers of the image encoder and show that skipping a certain set of layers and exiting early can increase the chance of the VLM generating harmful responses. We refer to this as the "Image enCoder Early-exiT" (ICET) vulnerability. Our experiments on three VLMs (LLaVA-1.5, LLaVA-NeXT, and Llama 3.2) show that performing early exits from the image encoder significantly increases the likelihood of generating harmful outputs. To tackle this, we propose a simple yet effective modification of the Clipped-Proximal Policy Optimization (Clip-PPO) algorithm for performing layer-wise multimodal RLHF on VLMs, which we term Layer-Wise PPO (L-PPO). We evaluate our L-PPO algorithm across three multimodal datasets and show that it consistently reduces the harmfulness caused by early exits.
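The early-exit behavior described in the abstract can be illustrated with a minimal sketch (an assumption, not the authors' code): the snippet below uses Hugging Face transformers to read an intermediate hidden state of a CLIP-style vision tower, which is what a VLM would consume if it "exited" the image encoder early instead of using the final layer. The model name, image path, and exit-layer index are illustrative placeholders.

```python
# Minimal sketch: simulating an image-encoder "early exit" by taking an
# intermediate hidden state of a CLIP vision tower. Not the paper's code;
# model name, image path, and exit layer are assumptions for illustration.
import torch
from PIL import Image
from transformers import CLIPVisionModel, CLIPImageProcessor

model_name = "openai/clip-vit-large-patch14-336"  # vision tower used by LLaVA-style VLMs (assumption)
processor = CLIPImageProcessor.from_pretrained(model_name)
vision_tower = CLIPVisionModel.from_pretrained(model_name).eval()

image = Image.open("example.jpg")  # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    out = vision_tower(pixel_values, output_hidden_states=True)

# out.hidden_states[0] is the embedding output; hidden_states[k] is layer k's output.
exit_layer = 12  # hypothetical early-exit point inside the 24-layer ViT-L encoder
early_features = out.hidden_states[exit_layer]  # what the VLM would see after an early exit
final_features = out.hidden_states[-1]          # features from the full encoder

print(early_features.shape, final_features.shape)
```

For context, the abstract's L-PPO is described as a layer-wise modification of the standard Clip-PPO objective; the sketch below shows only that standard clipped surrogate in PyTorch, not the paper's layer-wise variant.

```python
import torch

def clipped_surrogate(log_probs, old_log_probs, advantages, eps=0.2):
    """Standard PPO clipped objective (the base that L-PPO reportedly modifies)."""
    ratio = torch.exp(log_probs - old_log_probs)                 # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()                 # negated for gradient descent
```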

Rohit Lal, Erfan Shayegani, Chengyu Song, Nael Abu-Ghazaleh, Arindam Dutta, Trishna Chakraborty, Yue Dong, Saketh Bachu, Amit K. Roy-Chowdhury

Computing technology; computer technology

Rohit Lal, Erfan Shayegani, Chengyu Song, Nael Abu-Ghazaleh, Arindam Dutta, Trishna Chakraborty, Yue Dong, Saketh Bachu, Amit K. Roy-Chowdhury. Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models [EB/OL]. (2025-06-19) [2025-07-25]. https://arxiv.org/abs/2411.04291
