
Weighted Average Gradients for Feature Attribution

Source: arXiv

Abstract

In explainable AI, Integrated Gradients (IG) is a widely adopted technique for assessing the contribution of input features to model outputs by accumulating gradients along a path from a baseline input to the current input. The choice of baseline significantly influences the resulting explanation. While the traditional Expected Gradients (EG) method assumes baselines can be uniformly sampled and averaged with equal weights, this study argues that baselines should not be treated equivalently. We introduce Weighted Average Gradients (WG), a novel approach that evaluates baseline suitability in an unsupervised manner and incorporates a strategy for selecting effective baselines. Theoretical analysis demonstrates that WG satisfies essential criteria for explanation methods and offers greater stability than prior approaches. Experimental results further confirm that WG outperforms EG across diverse scenarios, achieving a 10-35% improvement on the main metrics. Moreover, by scoring baselines, our method can filter a subset of effective baselines for each input when computing explanations, maintaining high accuracy while reducing computational cost. The code is available at: https://github.com/Tamnt240904/weighted_baseline.
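To make the relationship between IG, EG, and a weighted-average variant concrete, the following is a minimal NumPy sketch. It uses a toy quadratic model with an analytic gradient; `grad_f`, the midpoint path discretization, and the placeholder `weights` argument are illustrative assumptions, not the paper's implementation. In particular, WG's actual unsupervised baseline-suitability score is not specified in the abstract, so the weights here are supplied by the caller; with equal weights the estimator reduces to the EG-style average over baselines.

```python
import numpy as np

# Toy differentiable model: f(x) = sum(w_model * x**2), with analytic gradient.
# (Any model/gradient pair could be substituted here.)
w_model = np.array([1.0, 2.0, 3.0])

def grad_f(x):
    return 2.0 * w_model * x

def integrated_gradients(x, baseline, steps=50):
    """Riemann (midpoint) approximation of IG:
    (x - baseline) * average gradient along the straight-line path
    from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline + alphas[:, None] * (x - baseline)
    grads = np.array([grad_f(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

def weighted_average_gradients(x, baselines, weights, steps=50):
    """Weighted average of per-baseline IG attributions.
    Equal weights recover the Expected Gradients estimator; WG would
    derive the weights from its unsupervised suitability score instead."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    attrs = np.array([integrated_gradients(x, b, steps) for b in baselines])
    return (weights[:, None] * attrs).sum(axis=0)
```

For the quadratic toy model the midpoint rule is exact, so each per-baseline attribution satisfies the completeness axiom: the attributions sum to f(x) - f(baseline).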

Kien Tran Duc Tuan, Tam Nguyen Trong, Son Nguyen Hoang, Khoat Than, Anh Nguyen Duc

Subject: computing technology; computer technology

Kien Tran Duc Tuan, Tam Nguyen Trong, Son Nguyen Hoang, Khoat Than, Anh Nguyen Duc. Weighted Average Gradients for Feature Attribution [EB/OL]. (2025-05-06) [2025-05-24]. https://arxiv.org/abs/2505.03201.
