Explaining How Visual, Textual and Multimodal Encoders Share Concepts
Sparse autoencoders (SAEs) have emerged as a powerful technique for extracting human-interpretable features from neural network activations. Previous works have compared different models based on SAE-derived features, but those comparisons were restricted to models within the same modality. We propose a novel indicator enabling the quantitative comparison of models across SAE features, and use it to conduct a comparative study of visual, textual and multimodal encoders. We also propose to quantify the Comparative Sharedness of individual features between different classes of models. With these two new tools, we conduct several studies on 21 encoders of the three types, at two significantly different sizes, considering both generalist and domain-specific datasets. The results allow us to revisit previous studies in light of encoders trained in a multimodal context and to quantify the extent to which all these models share representations or features. They also suggest that the visual features specific to VLMs among vision encoders are shared with text encoders, highlighting the impact of text pretraining. The code is available at https://github.com/CEA-LIST/SAEshareConcepts
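For readers unfamiliar with the underlying technique, the sketch below illustrates the generic idea of an SAE trained on encoder activations: a wide, non-negative bottleneck whose sparse codes serve as candidate interpretable features. This is a minimal illustration only; the class name, dimensions (d_model, d_features) and L1 penalty weight are assumptions for the example and do not correspond to the specific architecture, the comparison indicator, or the Comparative Sharedness measure introduced in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Minimal SAE over encoder activations (illustrative sketch, not the paper's implementation)."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # project activations to a wide feature space
        self.decoder = nn.Linear(d_features, d_model)  # reconstruct the original activations

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))   # sparse, non-negative feature codes
        reconstruction = self.decoder(features)
        return features, reconstruction

# Hypothetical usage: activations of shape (batch, d_model) taken from any visual, textual
# or multimodal encoder; dimensions and the sparsity coefficient are placeholder values.
sae = SparseAutoencoder(d_model=768, d_features=8192)
acts = torch.randn(4, 768)
features, recon = sae(acts)
loss = F.mse_loss(recon, acts) + 1e-3 * features.abs().mean()  # reconstruction + L1 sparsity penalty
```

Once such SAEs are fitted on the activations of different encoders, their feature dictionaries can be compared across models, which is the setting in which the paper's cross-modal indicator operates.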
Clément Cornet, Romaric Besançon, Hervé Le Borgne
Computing Technology, Computer Technology
Clément Cornet, Romaric Besançon, Hervé Le Borgne. Explaining How Visual, Textual and Multimodal Encoders Share Concepts [EB/OL]. (2025-07-24) [2025-08-10]. https://arxiv.org/abs/2507.18512