Uncovering the Structure of Explanation Quality with Spectral Analysis
As machine learning models are increasingly considered for high-stakes domains, effective explanation methods are crucial to ensure that their prediction strategies are transparent to the user. Over the years, numerous metrics have been proposed to assess the quality of explanations. However, their practical applicability remains unclear, in particular due to a limited understanding of which specific aspects each metric rewards. In this paper, we propose a new framework based on spectral analysis of explanation outcomes to systematically capture the multifaceted properties of different explanation techniques. Our analysis uncovers two distinct factors of explanation quality, stability and target sensitivity, that can be directly observed through spectral decomposition. Experiments on both MNIST and ImageNet show that popular evaluation techniques (e.g., pixel-flipping, entropy) partially capture the trade-offs between these factors. Overall, our framework provides a foundation for understanding explanation quality, guiding the development of more reliable techniques for evaluating explanations.
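The abstract does not spell out the decomposition itself. A minimal sketch of the underlying idea, assuming that "spectral analysis of explanation outcomes" amounts to an eigendecomposition of the Gram matrix of explanation vectors collected under input perturbations, could look as follows; the `explain` placeholder, the perturbation scale, and the sample count are all hypothetical stand-ins, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def explain(x):
    """Placeholder for a real explanation method (e.g., a saliency map),
    returned as a flattened attribution vector. Hypothetical stand-in."""
    return rng.standard_normal(x.size)

# Collect explanations for small random perturbations of one input
# (e.g., a 28x28 MNIST image), stacked into a matrix of shape (n, d).
x = rng.standard_normal((28, 28))
E = np.stack([explain(x + 0.05 * rng.standard_normal(x.shape))
              for _ in range(50)])

# Spectral analysis: eigenvalues of the Gram (second-moment) matrix
# of the explanation outcomes, sorted in descending order.
gram = E @ E.T / E.shape[0]
eigvals = np.linalg.eigvalsh(gram)[::-1]

# A spectrum concentrated in the leading eigenvalue would indicate that
# explanations vary little across perturbations (stability); mass spread
# over many eigenvalues would signal sensitivity to the perturbation.
print(eigvals[:5] / eigvals.sum())
```

Under these assumptions, the shape of the spectrum serves as a compact summary of how an explanation method trades off the two factors the abstract names, stability and target sensitivity.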
Johannes Maeß, Grégoire Montavon, Shinichi Nakajima, Klaus-Robert Müller, Thomas Schnake
Computing technology; computer technology
Johannes Maeß, Grégoire Montavon, Shinichi Nakajima, Klaus-Robert Müller, Thomas Schnake. Uncovering the Structure of Explanation Quality with Spectral Analysis [EB/OL]. (2025-04-11) [2025-04-29]. https://arxiv.org/abs/2504.08553.