Enhancing Interpretable Image Classification Through LLM Agents and Conditional Concept Bottleneck Models
Concept Bottleneck Models (CBMs) decompose image classification into a process governed by interpretable, human-readable concepts. Recent advances in CBMs have used Large Language Models (LLMs) to generate candidate concepts. However, a critical question remains: what is the optimal number of concepts to use? Current concept banks suffer from redundancy or insufficient coverage. To address this issue, we introduce a dynamic, agent-based approach that adjusts the concept bank in response to environmental feedback, optimizing the number of concepts for sufficient yet concise coverage. Moreover, we propose Conditional Concept Bottleneck Models (CoCoBMs) to overcome the limitations of traditional CBMs' concept scoring mechanisms. CoCoBMs improve the accuracy of assessing each concept's contribution to classification and feature an editable matrix that allows LLMs to correct concept scores that conflict with their internal knowledge. Our evaluations across 6 datasets show that our method not only improves classification accuracy by 6% but also enhances interpretability assessments by 30%.
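To make the bottleneck idea concrete, below is a minimal sketch of a generic CBM forward pass: the image is mapped to concept scores, and the classifier sees only those scores. All dimensions, the backbone, and the scoring head are hypothetical stand-ins for illustration; they do not reflect the paper's actual architecture, concept bank, or CoCoBM scoring mechanism.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Generic CBM sketch: image -> concept scores -> class logits.

    Hypothetical sizes; the paper's concept bank and conditional
    scoring (CoCoBM) are not reproduced here.
    """
    def __init__(self, num_concepts: int = 32, num_classes: int = 10):
        super().__init__()
        # Stand-in backbone mapping an image to a feature vector.
        self.backbone = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 224 * 224, 512),
            nn.ReLU(),
        )
        # Bottleneck: each unit corresponds to one human-readable concept.
        self.concept_head = nn.Linear(512, num_concepts)
        # The classifier sees ONLY concept scores, which is what makes
        # the prediction inspectable and, in principle, editable.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, x):
        features = self.backbone(x)
        concept_scores = torch.sigmoid(self.concept_head(features))
        logits = self.classifier(concept_scores)
        return logits, concept_scores

model = ConceptBottleneckModel()
images = torch.randn(4, 3, 224, 224)  # dummy batch
logits, concepts = model(images)
print(logits.shape, concepts.shape)  # torch.Size([4, 10]) torch.Size([4, 32])
```

Because the class decision is a function of the concept scores alone, a human (or, as the paper proposes, an LLM) can inspect and edit those scores to correct predictions without retraining the backbone.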
Yiwen Jiang, Deval Mehta, Wei Feng, Zongyuan Ge
Computing Technology, Computer Technology
Yiwen Jiang, Deval Mehta, Wei Feng, Zongyuan Ge. Enhancing Interpretable Image Classification Through LLM Agents and Conditional Concept Bottleneck Models [EB/OL]. (2025-06-02) [2025-06-30]. https://arxiv.org/abs/2506.01334.