Degradation-Aware Image Enhancement via Vision-Language Classification
Image degradation is a prevalent issue in real-world applications, affecting both visual quality and downstream processing tasks. In this study, we propose a framework that employs a Vision-Language Model (VLM) to automatically classify degraded images into predefined categories. The VLM assigns an input image to one of four categories: (A) super-resolution degradation (including noise, blur, and JPEG compression), (B) reflection artifacts, (C) motion blur, or (D) no visible degradation (high-quality image). Once classified, images in categories A, B, or C undergo targeted restoration using dedicated models tailored to each degradation type. The final output is a restored image with improved visual quality. Experimental results demonstrate that our approach accurately classifies image degradations and enhances image quality through specialized restoration models. Our method offers a scalable, automated solution for real-world image enhancement, combining the capabilities of VLMs with state-of-the-art restoration techniques.
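To illustrate the classify-then-route design described in the abstract, the following minimal Python sketch shows how a degradation label produced by a VLM could dispatch an image to a dedicated restoration model. This is not the authors' released code; the `query_vlm` call and the restorer functions are hypothetical placeholders standing in for an actual VLM prompt and the specialized restoration back-ends.

```python
from typing import Callable, Dict

# Hypothetical placeholder: in practice this would prompt a Vision-Language
# Model with the image and a multiple-choice question about its degradation.
def query_vlm(image_path: str) -> str:
    """Return one of 'A', 'B', 'C', 'D' (label set assumed from the abstract)."""
    raise NotImplementedError("Replace with an actual VLM call.")

# Hypothetical restoration back-ends; each would wrap a dedicated model.
def restore_super_resolution(image_path: str) -> str: ...
def restore_reflection(image_path: str) -> str: ...
def restore_motion_blur(image_path: str) -> str: ...

# Map each degradation category to its specialized restorer.
RESTORERS: Dict[str, Callable[[str], str]] = {
    "A": restore_super_resolution,  # noise / blur / JPEG compression
    "B": restore_reflection,        # reflection artifacts
    "C": restore_motion_blur,       # motion blur
}

def enhance(image_path: str) -> str:
    """Classify the degradation, then route the image to the matching restorer."""
    label = query_vlm(image_path)
    if label == "D":  # no visible degradation: return the input unchanged
        return image_path
    return RESTORERS[label](image_path)
```

The design choice this sketch highlights is that the VLM acts purely as a router: each restoration model only ever sees images of the degradation type it was trained for.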
Jie Cai, Kangning Yang, Jiaming Ding, Lan Fu, Ling Ouyang, Jiang Li, Jinglin Shen, Zibo Meng
Computing Technology, Computer Technology
Jie Cai, Kangning Yang, Jiaming Ding, Lan Fu, Ling Ouyang, Jiang Li, Jinglin Shen, Zibo Meng. Degradation-Aware Image Enhancement via Vision-Language Classification [EB/OL]. (2025-06-05) [2025-06-28]. https://arxiv.org/abs/2506.05450.