
Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks

Source: arXiv

Abstract

The lack of transparency of Deep Neural Networks continues to be a limitation that severely undermines their reliability and usage in high-stakes applications. Promising approaches to overcome such limitations are Prototype-Based Self-Explainable Neural Networks (PSENNs), whose predictions rely on the similarity between the input at hand and a set of prototypical representations of the output classes, offering therefore a deep, yet transparent-by-design, architecture. In this paper, we introduce a probabilistic reformulation of PSENNs, called Prob-PSENN, which replaces point estimates for the prototypes with probability distributions over their values. This provides not only a more flexible framework for an end-to-end learning of prototypes, but can also capture the explanatory uncertainty of the model, which is a missing feature in previous approaches. In addition, since the prototypes determine both the explanation and the prediction, Prob-PSENNs allow us to detect when the model is making uninformed or uncertain predictions, and to obtain valid explanations for them. Our experiments demonstrate that Prob-PSENNs provide more meaningful and robust explanations than their non-probabilistic counterparts, while remaining competitive in terms of predictive performance, thus enhancing the explainability and reliability of the models.
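The core idea described above can be sketched in a few lines: instead of a single point-estimate prototype per class, each class is represented by a distribution over prototypes, and sampling from those distributions yields a distribution over predictions whose spread reflects the model's explanatory uncertainty. The sketch below is a minimal, hypothetical illustration of that mechanism (Gaussian prototype distributions in a latent space, negative squared distance as similarity); the names, shapes, and similarity function are assumptions for illustration, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one Gaussian prototype distribution per class in a
# learned latent space (dimensions and parameters are illustrative only).
latent_dim, n_classes, n_samples = 8, 3, 100
proto_means = rng.normal(size=(n_classes, latent_dim))
proto_stds = np.full((n_classes, latent_dim), 0.1)

def predict_with_uncertainty(z):
    """Classify a latent point z by similarity to *sampled* prototypes.

    Sampling the prototypes, rather than using point estimates, produces a
    distribution over predictions; its spread captures the explanatory
    uncertainty that point-estimate PSENNs cannot express.
    """
    # Draw n_samples prototype sets: shape (n_samples, n_classes, latent_dim)
    protos = rng.normal(proto_means, proto_stds,
                        size=(n_samples, n_classes, latent_dim))
    # Similarity = negative squared Euclidean distance to each prototype
    sims = -((protos - z) ** 2).sum(axis=-1)      # (n_samples, n_classes)
    preds = sims.argmax(axis=-1)                  # winning class per sample
    # Empirical predictive distribution over classes
    counts = np.bincount(preds, minlength=n_classes) / n_samples
    return counts.argmax(), counts

# A point close to the class-1 prototype mean should be classified as class 1
# with a sharply peaked predictive distribution (low uncertainty).
z = proto_means[1] + 0.05 * rng.normal(size=latent_dim)
label, dist = predict_with_uncertainty(z)
```

A point far from every prototype mean would instead spread the sampled predictions across classes, which is how such a model can flag uninformed or uncertain predictions.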

Jon Vadillo, Roberto Santana, Jose A. Lozano, Marta Kwiatkowska

Subjects: Information Science and Information Technology; Research Methods in the Natural Sciences; Computing and Computer Technology

Jon Vadillo, Roberto Santana, Jose A. Lozano, Marta Kwiatkowska. Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks [EB/OL]. (2025-07-18) [2025-08-10]. https://arxiv.org/abs/2403.13740.
