Visual Bias and Interpretability in Deep Learning for Dermatological Image Analysis
Accurate skin disease classification is a critical yet challenging task due to high inter-class similarity, intra-class variability, and complex lesion textures. While deep learning-based computer-aided diagnosis (CAD) systems have shown promise in automating dermatological assessments, their performance is highly dependent on image pre-processing and model architecture. This study proposes a deep learning framework for multi-class skin disease classification, systematically evaluating three image pre-processing techniques: standard RGB, CMY color space transformation, and Contrast Limited Adaptive Histogram Equalization (CLAHE). We benchmark the performance of pre-trained convolutional neural networks (DenseNet201, EfficientNetB5) and transformer-based models (ViT, Swin Transformer, DinoV2 Large) using accuracy and F1-score as evaluation metrics. Results show that DinoV2 with RGB pre-processing achieves the highest accuracy (up to 93%) and F1-scores across all variants. Grad-CAM visualizations applied to RGB inputs further reveal precise lesion localization, enhancing interpretability. These findings underscore the importance of effective pre-processing and model choice in building robust and explainable CAD systems for dermatology.
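Of the three pre-processing variants the abstract compares, the CMY transformation is the simplest to illustrate: CMY is the subtractive complement of RGB, so each normalized channel is inverted. A minimal NumPy sketch (the function name `rgb_to_cmy` is our own, not from the paper):

```python
import numpy as np

def rgb_to_cmy(rgb: np.ndarray) -> np.ndarray:
    """Convert an RGB image (uint8, H x W x 3) to CMY in [0, 1].

    CMY is the subtractive complement of RGB: normalise each
    channel to [0, 1], then invert it (C = 1 - R, M = 1 - G, Y = 1 - B).
    """
    rgb_norm = rgb.astype(np.float32) / 255.0
    return 1.0 - rgb_norm

# A pure-red pixel (255, 0, 0) maps to (0, 1, 1) in CMY.
pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)
cmy = rgb_to_cmy(pixel)
```

CLAHE, by contrast, is typically applied per-tile with a clip limit (e.g. via OpenCV's `cv2.createCLAHE`) and is not reproduced here; the paper's exact parameters are not given in the abstract.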
Enam Ahmed Taufik, Abdullah Khondoker, Antara Firoz Parsa, Seraj Al Mahmud Mostafa
Subjects: Dermatology and Venereology; Medical Research Methods; State and Development of Medicine
Enam Ahmed Taufik, Abdullah Khondoker, Antara Firoz Parsa, Seraj Al Mahmud Mostafa. Visual Bias and Interpretability in Deep Learning for Dermatological Image Analysis [EB/OL]. (2025-08-06) [2025-08-24]. https://arxiv.org/abs/2508.04573.