
Mammo-SAE: Interpreting Breast Cancer Concept Learning with Sparse Autoencoders

Source: arXiv

Abstract

Interpretability is critical in high-stakes domains such as medical imaging, where understanding model decisions is essential for clinical adoption. In this work, we introduce Sparse Autoencoder (SAE)-based interpretability to breast imaging by analyzing Mammo-CLIP, a vision-language foundation model pretrained on large-scale mammogram image-report pairs. We train a patch-level Mammo-SAE on Mammo-CLIP to identify and probe latent features associated with clinically relevant breast concepts such as mass and suspicious calcification. Our findings reveal that the top-activated class-level latent neurons in the SAE latent space often align with ground-truth regions, and they also uncover several confounding factors influencing the model's decision-making process. Additionally, we analyze which latent neurons the model relies on during downstream finetuning to improve breast concept prediction. This study highlights the promise of interpretable SAE latent representations in providing deeper insight into the internal workings of foundation models at every layer for breast imaging.
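To make the approach concrete, below is a minimal sketch of a patch-level sparse autoencoder of the kind the abstract describes: it reconstructs frozen Mammo-CLIP patch embeddings through an overcomplete, sparsity-penalized latent layer, and its top-activated latent neurons can then be probed per concept. All names and hyperparameters here (PatchSAE, d_model, d_latent, l1_coeff, the ReLU encoder with an L1 penalty) are illustrative assumptions, not the paper's published implementation.

```python
import torch
import torch.nn as nn

class PatchSAE(nn.Module):
    """Hypothetical patch-level sparse autoencoder sketch (not the paper's
    released code). Maps patch embeddings to an overcomplete latent space
    and reconstructs them, so sparsity pressure pushes individual latent
    neurons toward interpretable concepts."""

    def __init__(self, d_model: int = 512, d_latent: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, x: torch.Tensor):
        # x: (num_patches, d_model) embeddings from a frozen vision encoder
        z = torch.relu(self.encoder(x))   # non-negative, sparse latent codes
        x_hat = self.decoder(z)           # reconstruction of the input
        return z, x_hat

def sae_loss(x, x_hat, z, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty on latent activations;
    # the penalty encourages only a few neurons to fire per patch.
    recon = (x_hat - x).pow(2).mean()
    sparsity = z.abs().mean()
    return recon + l1_coeff * sparsity

# Usage sketch: find the most-activated latent neurons for a batch of patches.
sae = PatchSAE()
patches = torch.randn(196, 512)               # stand-in for CLIP patch features
z, x_hat = sae(patches)
loss = sae_loss(patches, x_hat, z)
top_neurons = z.mean(dim=0).topk(5).indices   # candidate concept neurons
```

Probing then amounts to ranking latent neurons by activation for images of a given concept (e.g., mass) and checking which image patches they fire on, which is how alignment with ground-truth regions can be assessed.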

Krishna Kanth Nakka

Medical Research Methods; Oncology

Krishna Kanth Nakka. Mammo-SAE: Interpreting Breast Cancer Concept Learning with Sparse Autoencoders [EB/OL]. (2025-07-21) [2025-08-10]. https://arxiv.org/abs/2507.15227.