
Interpretable Evaluation of AI-Generated Content with Language-Grounded Sparse Encoders

Source: arXiv
Abstract

While the quality of AI-generated content, such as synthetic images, has become remarkably high, current evaluation metrics provide only coarse-grained assessments. They fail to identify the specific strengths and weaknesses that researchers and practitioners need for model selection and development, limiting both the scientific understanding and the commercial deployment of these generative models. To address this, we introduce Language-Grounded Sparse Encoders (LanSE), a novel architecture that creates interpretable evaluation metrics by identifying interpretable visual patterns and automatically describing them in natural language. Through large-scale human evaluation (more than 11,000 annotations) and large multimodal model (LMM) based analysis, LanSE demonstrates reliable detection of interpretable visual patterns in synthetic images, with more than 93% accuracy on natural images. LanSE further provides a fine-grained evaluation framework that quantifies four key dimensions of generation quality: prompt match, visual realism, physical plausibility, and content diversity. LanSE reveals nuanced model differences invisible to existing metrics, for instance, FLUX's superior physical plausibility and SDXL-medium's strong content diversity, while aligning with human judgments. By bridging interpretability with practical evaluation needs, LanSE offers all users of generative AI models a powerful tool for model selection, quality control of synthetic content, and model improvement. These capabilities directly address the need for public confidence and safety in AI-generated content, both critical for the future of generative AI applications.

Yiming Tang, Arash Lagzian, Srinivas Anumasa, Qiran Zou, Trang Nguyen, Ehsan Adeli, Ching-Yu Cheng, Yilun Du, Dianbo Liu

Subject: Computing Technology; Computer Technology

Yiming Tang, Arash Lagzian, Srinivas Anumasa, Qiran Zou, Trang Nguyen, Ehsan Adeli, Ching-Yu Cheng, Yilun Du, Dianbo Liu. Interpretable Evaluation of AI-Generated Content with Language-Grounded Sparse Encoders [EB/OL]. (2025-08-20) [2025-09-06]. https://arxiv.org/abs/2508.18236.