Line of Sight: On Linear Representations in VLLMs
Language models can be equipped with multimodal capabilities by fine-tuning on embeddings of visual inputs. But how do such multimodal models represent images in their hidden activations? We explore representations of image concepts within LLaVA-NeXT, a popular open-source VLLM. We find a diverse set of ImageNet classes represented via linearly decodable features in the residual stream. We show that these features are causal by performing targeted edits on the model output. To increase the diversity of the studied linear features, we train multimodal Sparse Autoencoders (SAEs), creating a highly interpretable dictionary of text and image features. We find that although model representations across modalities are quite disjoint, they become increasingly shared in deeper layers.
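The abstract leaves the probing and SAE setups implicit. As a minimal sketch of what "linearly decodable" means here, assuming residual-stream activations at a single layer have already been cached (the hidden size, sample count, and training loop below are illustrative, not the paper's configuration):

    import torch
    import torch.nn as nn

    d_model, n_classes = 4096, 1000                # hypothetical hidden size / ImageNet classes
    acts = torch.randn(2048, d_model)              # stand-in for cached residual-stream activations
    labels = torch.randint(0, n_classes, (2048,))  # stand-in ImageNet labels

    probe = nn.Linear(d_model, n_classes)          # the probe is a single linear map
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(probe(acts), labels)
        loss.backward()
        opt.step()

High probe accuracy at a layer is evidence that the class is linearly represented in that layer's residual stream. Likewise, a minimal sketch of a sparse autoencoder over the same activations, assuming a standard ReLU encoder/decoder with an L1 sparsity penalty (the dictionary width and penalty weight are illustrative):

    class SparseAutoencoder(nn.Module):
        def __init__(self, d_model, d_dict):
            super().__init__()
            self.enc = nn.Linear(d_model, d_dict)
            self.dec = nn.Linear(d_dict, d_model)

        def forward(self, x):
            f = torch.relu(self.enc(x))            # sparse feature activations
            return self.dec(f), f

    sae = SparseAutoencoder(d_model, d_dict=16384)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

    for _ in range(100):
        recon, feats = sae(acts)
        # reconstruction error plus an L1 term encouraging sparse, interpretable features
        loss = (recon - acts).pow(2).mean() + 1e-3 * feats.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

Training such an SAE on both text-token and image-token activations yields one shared dictionary, which is what makes cross-modal comparisons like the disjointness claim above measurable.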
Achyuta Rajaram, Sarah Schwettmann, Jacob Andreas, Arthur Conmy
Computing Technology, Computer Technology
Achyuta Rajaram, Sarah Schwettmann, Jacob Andreas, Arthur Conmy. Line of Sight: On Linear Representations in VLLMs [EB/OL]. (2025-06-05) [2025-06-27]. https://arxiv.org/abs/2506.04706.