
AKRMap: Adaptive Kernel Regression for Trustworthy Visualization of Cross-Modal Embeddings

Source: arXiv
Abstract

Cross-modal embeddings form the foundation for multi-modal models. However, visualization methods for interpreting cross-modal embeddings have been primarily confined to traditional dimensionality reduction (DR) techniques like PCA and t-SNE. These DR methods primarily focus on feature distributions within a single modality, while failing to incorporate metrics (e.g., CLIPScore) across multiple modalities. This paper introduces AKRMap, a new DR technique designed to visualize cross-modal embedding metrics with enhanced accuracy by learning kernel regression of the metric landscape in the projection space. Specifically, AKRMap constructs a supervised projection network guided by a post-projection kernel regression loss, and employs adaptive generalized kernels that can be jointly optimized with the projection. This approach enables AKRMap to efficiently generate visualizations that capture complex metric distributions, while also supporting interactive features such as zoom and overlay for deeper exploration. Quantitative experiments demonstrate that AKRMap outperforms existing DR methods in generating more accurate and trustworthy visualizations. We further showcase the effectiveness of AKRMap in visualizing and comparing cross-modal embeddings for text-to-image models. Code and demo are available at https://github.com/yilinye/AKRMap.
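To make the core idea concrete, below is a minimal, hypothetical sketch of kernel regression of a metric landscape over a 2-D projection space. It is not the authors' implementation: it uses plain Nadaraya-Watson regression with a Gaussian kernel and leave-one-out bandwidth selection as simple stand-ins for AKRMap's adaptive generalized kernels and joint optimization with the projection network; the projected points and per-sample metric values are synthetic placeholders.

```python
import numpy as np

def kernel_regress(proj, metric, query, bandwidth):
    """Nadaraya-Watson regression: estimate the metric at query points
    from metric values attached to 2-D projected embeddings."""
    d2 = ((query[:, None, :] - proj[None, :, :]) ** 2).sum(-1)  # (M, N) squared distances
    w = np.exp(-d2 / (2 * bandwidth ** 2))                      # Gaussian kernel weights
    return (w @ metric) / w.sum(axis=1)                         # weighted average per query

def select_bandwidth(proj, metric, candidates):
    """Pick the bandwidth minimizing leave-one-out squared error
    (a simple stand-in for jointly optimizing an adaptive kernel)."""
    best, best_err = None, np.inf
    for h in candidates:
        d2 = ((proj[:, None, :] - proj[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * h ** 2))
        np.fill_diagonal(w, 0.0)  # exclude each point from its own prediction
        pred = (w @ metric) / w.sum(axis=1)
        err = np.mean((pred - metric) ** 2)
        if err < best_err:
            best, best_err = h, err
    return best

rng = np.random.default_rng(0)
proj = rng.normal(size=(200, 2))                           # stand-in: 2-D projected embeddings
metric = np.sin(proj[:, 0]) + 0.1 * rng.normal(size=200)   # stand-in: per-sample CLIPScore-like metric
h = select_bandwidth(proj, metric, [0.1, 0.3, 1.0])
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 50),
                            np.linspace(-2, 2, 50)), -1).reshape(-1, 2)
landscape = kernel_regress(proj, metric, grid, h)          # dense metric landscape to render as a map
```

The `landscape` array would be reshaped to 50x50 and rendered as a heatmap; AKRMap's contribution is that the projection itself is trained against a post-projection kernel-regression loss so this landscape stays faithful, rather than regressing over a fixed PCA/t-SNE layout as done here.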

Yilin Ye, Junchao Huang, Xingchen Zeng, Jiazhi Xia, Wei Zeng

Computing Technology; Computer Technology

Yilin Ye, Junchao Huang, Xingchen Zeng, Jiazhi Xia, Wei Zeng. AKRMap: Adaptive Kernel Regression for Trustworthy Visualization of Cross-Modal Embeddings [EB/OL]. (2025-05-20) [2025-06-10]. https://arxiv.org/abs/2505.14664.