
Towards Faithful Class-level Self-explainability in Graph Neural Networks by Subgraph Dependencies


Source: arXiv
English Abstract

Enhancing the interpretability of graph neural networks (GNNs) is crucial to ensure their safe and fair deployment. Recent work has introduced self-explainable GNNs that generate explanations as part of training, improving both faithfulness and efficiency. Some of these models, such as ProtGNN and PGIB, learn class-specific prototypes, offering a potential pathway toward class-level explanations. However, their evaluations focus solely on instance-level explanations, leaving open the question of whether these prototypes meaningfully generalize across instances of the same class. In this paper, we introduce GraphOracle, a novel self-explainable GNN framework designed to generate and evaluate class-level explanations for GNNs. Our model jointly learns a GNN classifier and a set of structured, sparse subgraphs that are discriminative for each class. We propose a novel integrated training that captures graph–subgraph–prediction dependencies efficiently and faithfully, validated through a masking-based evaluation strategy. This strategy enables us to retroactively assess whether prior methods like ProtGNN and PGIB deliver effective class-level explanations. Our results show that they do not. In contrast, GraphOracle achieves superior fidelity, explainability, and scalability across a range of graph classification tasks. We further demonstrate that GraphOracle avoids the computational bottlenecks of previous methods, such as Monte Carlo Tree Search, by using entropy-regularized subgraph selection and lightweight random walk extraction, enabling faster and more scalable training. These findings position GraphOracle as a practical and principled solution for faithful class-level self-explainability in GNNs.
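The "lightweight random walk extraction" mentioned in the abstract can be illustrated with a minimal sketch: a short random walk over a graph's adjacency structure collects a small, connected candidate subgraph, avoiding the exhaustive search that Monte Carlo Tree Search performs. The function name, adjacency representation, and walk length below are illustrative assumptions, not the paper's actual implementation.

```python
import random

def random_walk_subgraph(adj, start, walk_len=8, seed=0):
    """Extract a candidate explanation subgraph via a short random walk.

    `adj` maps each node to a list of its neighbors. The returned set of
    undirected edges is a hypothetical stand-in for the kind of sparse
    subgraph a lightweight extraction step might produce.
    """
    rng = random.Random(seed)
    node = start
    edges = set()
    for _ in range(walk_len):
        neighbors = adj.get(node, [])
        if not neighbors:
            break  # dead end: stop the walk early
        nxt = rng.choice(neighbors)
        # Store edges in canonical (min, max) order so the set deduplicates.
        edges.add(tuple(sorted((node, nxt))))
        node = nxt
    return edges

# Toy graph: a triangle (0-1-2) with a tail node 3 attached to node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
sub = random_walk_subgraph(adj, start=0)
print(sorted(sub))
```

Because each walk touches at most `walk_len` edges, extraction cost is linear in the walk length rather than in the size of the candidate subgraph space, which is the scalability argument the abstract makes.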

Fanzhen Liu, Xiaoxiao Ma, Jian Yang, Alsharif Abuadbba, Kristen Moore, Surya Nepal, Cecile Paris, Quan Z. Sheng, Jia Wu

Subject: Computing Technology, Computer Science

Fanzhen Liu, Xiaoxiao Ma, Jian Yang, Alsharif Abuadbba, Kristen Moore, Surya Nepal, Cecile Paris, Quan Z. Sheng, Jia Wu. Towards Faithful Class-level Self-explainability in Graph Neural Networks by Subgraph Dependencies [EB/OL]. (2025-08-15) [2025-08-28]. https://arxiv.org/abs/2508.11513
