
Ignoring Directionality Leads to Compromised Graph Neural Network Explanations

Source: arXiv
Abstract

Graph Neural Networks (GNNs) are increasingly used in critical domains, where reliable explanations are vital for supporting human decision-making. However, the common practice of graph symmetrization discards directional information, leading to significant information loss and misleading explanations. Our analysis demonstrates how this practice compromises explanation fidelity. Through theoretical and empirical studies, we show that preserving directional semantics significantly improves explanation quality, ensuring more faithful insights for human decision-makers. These findings highlight the need for direction-aware GNN explainability in security-critical applications.
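To illustrate the symmetrization issue the abstract refers to, below is a minimal sketch (not taken from the paper; the `symmetrize` helper and the example adjacency matrices are hypothetical). It shows that two graphs with opposite edge directions collapse to the same undirected adjacency matrix after symmetrization, so any explanation computed on the symmetrized input cannot distinguish the two directional structures.

```python
import numpy as np

def symmetrize(adj: np.ndarray) -> np.ndarray:
    """Drop edge directions: keep an undirected edge wherever either
    directed edge (i -> j) or (j -> i) exists."""
    return np.clip(adj + adj.T, 0, 1)

# Two different directed graphs on 3 nodes (hypothetical examples):
# graph A has edges 0 -> 1 and 1 -> 2; graph B reverses both edges.
adj_a = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]])
adj_b = adj_a.T

# After symmetrization both collapse to the same undirected graph.
print(np.array_equal(symmetrize(adj_a), symmetrize(adj_b)))  # True
```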

Changsheng Sun, Xinke Li, Jin Song Dong

Computing Technology, Computer Technology

Changsheng Sun, Xinke Li, Jin Song Dong. Ignoring Directionality Leads to Compromised Graph Neural Network Explanations[EB/OL]. (2025-06-04)[2025-06-14]. https://arxiv.org/abs/2506.04608.