
DCBM: Data-Efficient Visual Concept Bottleneck Models

Source: arXiv
Abstract

Concept Bottleneck Models (CBMs) enhance the interpretability of neural networks by basing predictions on human-understandable concepts. However, current CBMs typically rely on concept sets extracted from large language models or extensive image corpora, limiting their effectiveness in data-sparse scenarios. We propose Data-efficient CBMs (DCBMs), which reduce the need for large sample sizes during concept generation while preserving interpretability. DCBMs define concepts as image regions detected by segmentation or detection foundation models, allowing each image to generate multiple concepts across different granularities. This removes reliance on textual descriptions and large-scale pre-training, making DCBMs applicable to fine-grained classification and out-of-distribution tasks. Attribution analysis using Grad-CAM demonstrates that DCBMs deliver visual concepts that can be localized in test images. By leveraging dataset-specific concepts instead of predefined ones, DCBMs enhance adaptability to new domains.
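The pipeline sketched in the abstract (region proposals from a foundation model, region embeddings as concepts, a classifier restricted to concept activations) can be made concrete with a toy example. The following is a minimal, hypothetical PyTorch sketch, not the paper's implementation: `propose_regions` (quadrant crops) and `encode` (adaptive pooling) are placeholder stand-ins for a segmentation/detection foundation model and a frozen image encoder, and the max-similarity concept activations and linear head are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def propose_regions(image):
    # Hypothetical stand-in for a segmentation/detection foundation model:
    # here we simply return the four quadrant crops of the image.
    _, h, w = image.shape
    return [image[:, i * h // 2:(i + 1) * h // 2, j * w // 2:(j + 1) * w // 2]
            for i in range(2) for j in range(2)]

def encode(crop, dim=512):
    # Hypothetical stand-in for a frozen image encoder (e.g. CLIP-like);
    # adaptive pooling just yields a fixed-size unit vector for this toy.
    v = F.adaptive_avg_pool1d(crop.reshape(1, 1, -1), dim)[0, 0]
    return F.normalize(v, dim=-1)

# Concept bank: region crops from a handful of images become the concepts,
# so no textual descriptions or large corpora are required.
samples = [torch.rand(3, 224, 224) for _ in range(8)]
concept_bank = torch.stack(
    [encode(r) for img in samples for r in propose_regions(img)])

class ConceptBottleneck(nn.Module):
    """Classifies from interpretable concept activations only."""
    def __init__(self, bank, num_classes):
        super().__init__()
        self.register_buffer("bank", bank)                # (num_concepts, dim)
        self.head = nn.Linear(bank.shape[0], num_classes)

    def forward(self, image):
        # Image-level activation per concept: max cosine similarity
        # between the concept and any region of the image.
        regions = torch.stack([encode(r) for r in propose_regions(image)])
        acts = (regions @ self.bank.T).max(dim=0).values  # (num_concepts,)
        return self.head(acts)

model = ConceptBottleneck(concept_bank, num_classes=10)
print(model(torch.rand(3, 224, 224)).shape)  # torch.Size([10])
```

Because the bottleneck is the concept-activation vector, each weight of the final linear head attaches to one visual concept, which is what makes the predictions attributable to localizable image regions.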

Margret Keuper, Katharina Prasse, Patrick Knab, Sascha Marton, Christian Bartelt

Computing Technology, Computer Technology

Margret Keuper, Katharina Prasse, Patrick Knab, Sascha Marton, Christian Bartelt. DCBM: Data-Efficient Visual Concept Bottleneck Models [EB/OL]. (2025-07-02) [2025-07-16]. https://arxiv.org/abs/2412.11576.
