MedViT V2: Medical Image Classification with KAN-Integrated Transformers and Dilated Neighborhood Attention

Source: arXiv
Abstract

Convolutional networks, transformers, hybrid models, and Mamba-based architectures have demonstrated strong performance across various medical image classification tasks. However, these methods were primarily designed to classify clean images using labeled data. In contrast, real-world clinical data often involve image corruptions that are unique to multi-center studies and stem from variations in imaging equipment across manufacturers. In this paper, we introduce the Medical Vision Transformer (MedViTV2), a novel architecture incorporating Kolmogorov-Arnold Network (KAN) layers into the transformer architecture for the first time, aiming for generalized medical image classification. We have developed an efficient KAN block to reduce computational load while enhancing the accuracy of the original MedViT. Additionally, to counteract the fragility of our MedViT when scaled up, we propose an enhanced Dilated Neighborhood Attention (DiNA), an adaptation of the efficient fused dot-product attention kernel that captures global context and expands receptive fields, allowing the model to scale effectively and address feature collapse issues. Moreover, a hierarchical hybrid strategy is introduced to stack our Local Feature Perception and Global Feature Perception blocks in an efficient manner, balancing local and global feature perception to boost performance. Extensive experiments on 17 medical image classification datasets and 12 corrupted medical image datasets demonstrate that MedViTV2 achieves state-of-the-art results in 27 out of 29 experiments with reduced computational complexity. MedViTV2 is 44% more computationally efficient than the previous version and significantly enhances accuracy, achieving improvements of 4.6% on MedMNIST, 5.8% on NonMNIST, and 13.4% on the MedMNIST-C benchmark.
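
To make the KAN-in-transformer idea from the abstract more concrete, the following is a minimal, illustrative PyTorch sketch: a standard pre-norm transformer block whose MLP is replaced by a simplified KAN-style layer. The Chebyshev-polynomial basis, the module names (ChebyKANLayer, KANTransformerBlock), and all dimensions are assumptions made for illustration; this is not the paper's efficient KAN block, and it does not reproduce DiNA or the hierarchical hybrid stacking.

# Illustrative sketch only (not the authors' code): a transformer encoder block
# with its MLP swapped for a simplified KAN-style layer. The Chebyshev basis and
# all names/hyperparameters are assumptions for demonstration purposes.
import torch
import torch.nn as nn


class ChebyKANLayer(nn.Module):
    """Maps each input feature through a learnable 1-D function built from
    Chebyshev polynomials, then mixes features linearly (a lightweight
    stand-in for spline-based KAN layers)."""

    def __init__(self, dim_in: int, dim_out: int, degree: int = 4):
        super().__init__()
        self.degree = degree
        # One coefficient per (input feature, output feature, basis function).
        self.coeffs = nn.Parameter(
            torch.randn(dim_in, dim_out, degree + 1) / (dim_in * (degree + 1)) ** 0.5
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.tanh(x)                    # squash into [-1, 1] for Chebyshev
        cheb = [torch.ones_like(x), x]       # T0, T1
        for _ in range(2, self.degree + 1):
            cheb.append(2 * x * cheb[-1] - cheb[-2])  # T_k = 2x*T_{k-1} - T_{k-2}
        basis = torch.stack(cheb, dim=-1)    # (..., dim_in, degree + 1)
        return torch.einsum("...id,iod->...o", basis, self.coeffs)


class KANTransformerBlock(nn.Module):
    """Pre-norm transformer block with the feed-forward MLP replaced by a KAN layer."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.kan = ChebyKANLayer(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # self-attention + residual
        x = x + self.kan(self.norm2(x))                    # KAN feed-forward + residual
        return x


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 256)            # (batch, patches, embedding dim)
    print(KANTransformerBlock()(tokens).shape)   # torch.Size([2, 196, 256])

In this sketch the per-feature learnable functions play the role that fixed activations plus a linear MLP play in a standard block; the paper's actual efficient KAN design and its integration with DiNA should be taken from the source at https://arxiv.org/abs/2502.13693.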

Hojat Asgariandehkordi, Omid Nejati Manzari, Taha Koleilat, Hassan Rivaz, Yiming Xiao

Subject: Medical research methods

Hojat Asgariandehkordi, Omid Nejati Manzari, Taha Koleilat, Hassan Rivaz, Yiming Xiao. MedViT V2: Medical Image Classification with KAN-Integrated Transformers and Dilated Neighborhood Attention [EB/OL]. (2025-07-29) [2025-08-15]. https://arxiv.org/abs/2502.13693.
