
BACON: A fully explainable AI model with graded logic for decision making problems


Source: arXiv
English Abstract

As machine learning models and autonomous agents are increasingly deployed in high-stakes, real-world domains such as healthcare, security, finance, and robotics, the need for transparent and trustworthy explanations has become critical. To ensure end-to-end transparency of AI decisions, we need models that are not only accurate but also fully explainable and human-tunable. We introduce BACON, a novel framework for automatically training explainable AI models for decision making problems using graded logic. BACON achieves high predictive accuracy while offering full structural transparency and precise, logic-based symbolic explanations, enabling effective human-AI collaboration and expert-guided refinement. We evaluate BACON with a diverse set of scenarios: classic Boolean approximation, Iris flower classification, house purchasing decisions and breast cancer diagnosis. In each case, BACON provides high-performance models while producing compact, human-verifiable decision logic. These results demonstrate BACON's potential as a practical and principled approach for delivering crisp, trustworthy explainable AI.
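Graded logic, as developed in Dujmović's work, models partial truth with aggregators that interpolate continuously between conjunction and disjunction. The sketch below is a minimal, illustrative Python example of one such aggregator built on a weighted power mean; the function name, weights, and exponent values are assumptions made here for illustration and do not reproduce BACON's trained decision models.

```python
# Minimal sketch of a graded-logic aggregator: a weighted power mean that
# interpolates between conjunction (r -> -inf, i.e. min) and disjunction
# (r -> +inf, i.e. max). Weights and exponents below are illustrative
# assumptions, not values taken from the BACON paper.
from math import inf, prod


def graded_and_or(values, weights, r):
    """Weighted power mean of degrees of truth in [0, 1].

    r < 1 behaves conjunctively (closer to min), r > 1 disjunctively
    (closer to max); r == 0 gives the weighted geometric mean.
    """
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    if r == -inf:
        return min(values)
    if r == inf:
        return max(values)
    if r == 0:
        return prod(v ** w for v, w in zip(values, weights))
    if r < 0 and any(v == 0 for v in values):
        return 0.0  # hard conjunctive behavior: a zero input annihilates the result
    return sum(w * v ** r for v, w in zip(values, weights)) ** (1.0 / r)


if __name__ == "__main__":
    # Aggregate two suitability degrees with a mostly conjunctive operator
    # (r = -1, the weighted harmonic mean), then with pure AND and pure OR.
    print(graded_and_or([0.9, 0.6], [0.5, 0.5], -1))    # ~0.72
    print(graded_and_or([0.9, 0.6], [0.5, 0.5], -inf))  # 0.6 (min, pure AND)
    print(graded_and_or([0.9, 0.6], [0.5, 0.5], inf))   # 0.9 (max, pure OR)
```

Varying the exponent (and thus the andness of the aggregator) is what makes such logic "graded": the same structure can express anything from a strict simultaneity requirement to a permissive substitutability of inputs.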

Haishi Bai, Jozo Dujmovic, Jianwu Wang

Computing technology, computer technology

Haishi Bai, Jozo Dujmovic, Jianwu Wang. BACON: A fully explainable AI model with graded logic for decision making problems [EB/OL]. (2025-05-20) [2025-06-23]. https://arxiv.org/abs/2505.14510.
