National Preprint Platform

MetaExplainer: A Framework to Generate Multi-Type User-Centered Explanations for AI Systems

Source: arXiv
English Abstract

Explanations are crucial for building trustworthy AI systems, but a gap often exists between the explanations provided by models and those needed by users. To address this gap, we introduce MetaExplainer, a neuro-symbolic framework designed to generate user-centered explanations. Our approach employs a three-stage process: first, we decompose user questions into machine-readable formats using state-of-the-art large language models (LLMs); second, we delegate the task of generating system recommendations to model explainer methods; and finally, we synthesize natural language explanations that summarize the explainer outputs. Throughout this process, we utilize an Explanation Ontology to guide the language models and explainer methods. By leveraging LLMs and a structured approach to explanation generation, MetaExplainer aims to enhance the interpretability and trustworthiness of AI systems across various applications, providing users with tailored, question-driven explanations that better meet their needs. Comprehensive evaluations of MetaExplainer demonstrate a step towards evaluating and utilizing current state-of-the-art explanation frameworks. Our results show high performance across all stages, with a 59.06% F1-score in question reframing, 70% faithfulness in model explanations, and 67% context-utilization in natural language synthesis. User studies corroborate these findings, highlighting the creativity and comprehensiveness of generated explanations. Tested on the Diabetes (PIMA Indian) tabular dataset, MetaExplainer supports diverse explanation types, including Contrastive, Counterfactual, Rationale, Case-Based, and Data explanations. The framework's versatility and traceability, derived from using an ontology to guide LLMs, suggest broad applicability beyond the tested scenarios, positioning MetaExplainer as a promising tool for enhancing AI explainability across various domains.
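The three-stage process described in the abstract can be sketched roughly as follows. Everything here is an illustrative assumption: the function names, the keyword heuristic, and the stub outputs merely stand in for the LLM-driven question reframing, the model explainer methods, and the ontology-guided synthesis the paper actually uses.

```python
# Hypothetical sketch of MetaExplainer's three-stage pipeline (abstract-level
# description only; not the paper's actual API).

def decompose_question(question: str) -> dict:
    """Stage 1: map a user question to a machine-readable form.
    The real system uses an LLM guided by an Explanation Ontology;
    a simple keyword heuristic stands in for that step here."""
    keyword_to_type = {
        "instead": "Contrastive",
        "what if": "Counterfactual",
        "similar": "Case-Based",
        "data": "Data",
    }
    q = question.lower()
    for keyword, etype in keyword_to_type.items():
        if keyword in q:
            return {"question": question, "explanation_type": etype}
    return {"question": question, "explanation_type": "Rationale"}

def run_explainer(parsed: dict) -> dict:
    """Stage 2: delegate to a model explainer method (stubbed out)."""
    etype = parsed["explanation_type"]
    return {"type": etype, "evidence": f"explainer output for {etype}"}

def synthesize(explainer_output: dict) -> str:
    """Stage 3: summarize explainer output in natural language
    (an LLM performs this step in the actual framework)."""
    return f"{explainer_output['type']} explanation: {explainer_output['evidence']}"

def meta_explain(question: str) -> str:
    """End-to-end: question -> parsed form -> explainer -> summary."""
    return synthesize(run_explainer(decompose_question(question)))

print(meta_explain("What if my glucose level were lower?"))
```

The routing table covers the five explanation types the abstract reports (Contrastive, Counterfactual, Rationale, Case-Based, Data), with Rationale as the fallback; a real deployment would replace each stub with the corresponding LLM or explainer call.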

Shruthi Chari, Oshani Seneviratne, Prithwish Chakraborty, Pablo Meyer, Deborah L. McGuinness

Computing Technology; Computer Technology

Shruthi Chari, Oshani Seneviratne, Prithwish Chakraborty, Pablo Meyer, Deborah L. McGuinness. MetaExplainer: A Framework to Generate Multi-Type User-Centered Explanations for AI Systems [EB/OL]. (2025-08-01) [2025-08-11]. https://arxiv.org/abs/2508.00300.