
SHAP-based Explanations are Sensitive to Feature Representation

Source: arXiv
Abstract

Local feature-based explanations are a key component of the XAI toolkit. These explanations compute feature importance values relative to an "interpretable" feature representation. In tabular data, feature values themselves are often considered interpretable. This paper examines the impact of data engineering choices on local feature-based explanations. We demonstrate that simple, common data engineering techniques, such as representing age with a histogram or encoding race in a specific way, can manipulate feature importance as determined by popular methods like SHAP. Notably, the sensitivity of explanations to feature representation can be exploited by adversaries to obscure issues like discrimination. While the intuition behind these results is straightforward, their systematic exploration has been lacking. Previous work has focused on adversarial attacks on feature-based explainers by biasing data or manipulating models. To the best of our knowledge, this is the first study demonstrating that explainers can be misled by standard, seemingly innocuous data engineering techniques.
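As a rough illustration of the phenomenon described in the abstract, the Python sketch below (not from the paper; the synthetic data, the random-forest model, and the age-binning scheme are all hypothetical choices) computes mean-absolute SHAP values for the same underlying data under two feature representations, raw age versus coarsely binned age, so that the resulting importance rankings can be compared.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic data: a "risk" outcome driven by age and income (illustrative only).
rng = np.random.default_rng(0)
n = 2000
age = rng.integers(18, 80, size=n)
income = rng.normal(50.0, 15.0, size=n)
y = 2.0 * (age > 45) + 0.05 * income + rng.normal(0.0, 0.1, size=n)

def mean_abs_shap(X: pd.DataFrame, y: np.ndarray) -> pd.Series:
    """Fit a model and return the mean |SHAP value| for each feature."""
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)  # shape: (n_samples, n_features)
    return pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)

# Representation A: age left as a raw numeric feature.
X_raw = pd.DataFrame({"age": age, "income": income})

# Representation B: the same age values coarsely binned (histogram-style encoding).
X_binned = pd.DataFrame({
    "age": pd.cut(age, bins=[17, 40, 60, 80], labels=False),
    "income": income,
})

print("Raw age representation:\n", mean_abs_shap(X_raw, y))
print("Binned age representation:\n", mean_abs_shap(X_binned, y))

Comparing the two printed series shows how much of the importance attributed to age depends on its encoding; the paper studies such representation effects systematically, including their adversarial use.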

Hyunseung Hwang, Andrew Bell, Joao Fonseca, Venetia Pliatsika, Julia Stoyanovich, Steven Euijong Whang

Computing Technology; Computer Technology

Hyunseung Hwang, Andrew Bell, Joao Fonseca, Venetia Pliatsika, Julia Stoyanovich, Steven Euijong Whang. SHAP-based Explanations are Sensitive to Feature Representation [EB/OL]. (2025-05-13) [2025-06-12]. https://arxiv.org/abs/2505.08345.
