Interpretable Hierarchical Concept Reasoning through Attention-Guided Graph Learning
Concept-Based Models (CBMs) are a class of deep learning models that provide interpretability by explaining predictions through high-level concepts. These models first predict concepts and then use them to perform a downstream task. However, current CBMs offer interpretability only for the final task prediction, while the concept predictions themselves are typically made via black-box neural networks. To address this limitation, we propose Hierarchical Concept Memory Reasoner (H-CMR), a new CBM that provides interpretability for both concept and task predictions. H-CMR models relationships between concepts using a learned directed acyclic graph, where edges represent logic rules that define concepts in terms of other concepts. During inference, H-CMR employs a neural attention mechanism to select a subset of these rules, which are then applied hierarchically to predict all concepts and the final task. Experimental results demonstrate that H-CMR matches state-of-the-art performance while enabling strong human interaction through concept and model interventions. The former can significantly improve accuracy at inference time, while the latter can enhance data efficiency during training when background knowledge is available.
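To make the mechanism described above concrete, below is a minimal, assumption-laden PyTorch sketch of the general idea: each concept has a learned bank of candidate logic rules over its parent concepts, a neural attention mechanism selects rules conditioned on the input, and the selected rules are evaluated hierarchically along a topological order of the concept DAG before a task head reads off the final prediction. All class names, the soft-rule parameterization, and the evaluation details here are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: names (HCMRSketch, rule_signs, etc.) and the soft
# rule semantics are assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn


class HCMRSketch(nn.Module):
    def __init__(self, in_dim, n_concepts, n_rules_per_concept, emb_dim, n_tasks):
        super().__init__()
        self.n_concepts = n_concepts
        # Learned rule memory: for each concept, a bank of candidate rules.
        # Each rule is parameterized by soft signs over parent concepts
        # (+1: parent should be true, -1: false, ~0: irrelevant).
        self.rule_signs = nn.Parameter(
            torch.randn(n_concepts, n_rules_per_concept, n_concepts))
        # Encoder producing a query embedding from the raw input.
        self.encoder = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())
        # Attention keys: one key per rule, queried by the input embedding.
        self.rule_keys = nn.Parameter(
            torch.randn(n_concepts, n_rules_per_concept, emb_dim))
        # Task head operating on the final concept predictions.
        self.task_head = nn.Linear(n_concepts, n_tasks)

    def forward(self, x, topo_order):
        """topo_order: concept indices in topological order of the learned DAG."""
        h = self.encoder(x)                                  # (batch, emb_dim)
        batch = h.shape[0]
        preds = [torch.zeros(batch) for _ in range(self.n_concepts)]
        for c in topo_order:
            parents = torch.stack(preds, dim=1)              # (batch, n_concepts)
            # Input-conditioned attention over this concept's rule bank.
            scores = self.rule_keys[c] @ h.T                  # (n_rules, batch)
            attn = torch.softmax(scores, dim=0)               # soft rule selection
            # Soft rule evaluation: agreement between parent predictions
            # and each rule's required signs, squashed to a probability.
            signs = torch.tanh(self.rule_signs[c])            # (n_rules, n_concepts)
            rule_sat = torch.sigmoid(parents @ signs.T)       # (batch, n_rules)
            preds[c] = (attn.T * rule_sat).sum(dim=1)         # (batch,)
        concepts = torch.stack(preds, dim=1)
        return concepts, self.task_head(concepts)


# Usage: 4 concepts predicted in the (assumed) DAG order [0, 1, 2, 3].
model = HCMRSketch(in_dim=32, n_concepts=4, n_rules_per_concept=3,
                   emb_dim=16, n_tasks=2)
x = torch.randn(8, 32)
c_pred, y_pred = model(x, topo_order=[0, 1, 2, 3])
```

Because rule selection happens via attention at inference time, a human can inspect which rule produced each concept prediction, and interventions amount to overwriting a concept's value (or a rule) before downstream concepts are computed.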
David Debot, Pietro Barbiero, Gabriele Dominici, Giuseppe Marra
Subject: Computing Technology, Computer Technology
David Debot, Pietro Barbiero, Gabriele Dominici, Giuseppe Marra. Interpretable Hierarchical Concept Reasoning through Attention-Guided Graph Learning [EB/OL]. (2025-06-26) [2025-07-16]. https://arxiv.org/abs/2506.21102.