
Robust Fairness Vision-Language Learning for Medical Image Analysis

Source: arXiv
Abstract

The advent of Vision-Language Models (VLMs) in medical image analysis offers the potential to process multimodal inputs and improve performance over traditional inference methods. However, given the domain in which these models are deployed, fairness and robustness are essential to ensure the model performs reliably for every patient. In this paper, we introduce a framework for ensuring the robustness and fairness of VLMs. The framework modifies the training loss by identifying and adjusting faulty image-text pairs through a Dynamic Bad Pair Mining algorithm, and by using the Sinkhorn distance to keep the loss distributions of protected groups from deviating from the overall loss distribution. Experiments with our framework show up to an 8.6% improvement in equity-scaled AUC.
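The abstract only names the two ingredients, so the following is a minimal sketch, in PyTorch, of how they could combine in a training loss: a per-pair contrastive loss is down-weighted on pairs flagged by bad-pair mining, and a Sinkhorn (entropy-regularized optimal-transport) distance penalizes divergence between each protected group's loss distribution and the overall one. Everything here, including the function names, the uniform-weight Sinkhorn iterations, the hard masking of mined pairs, and the trade-off weight `lam`, is an illustrative assumption, not the authors' implementation; how the bad-pair mask is computed is deferred to the paper.

```python
# Illustrative sketch only -- the exact formulation is an assumption,
# not the authors' released code.
import torch


def sinkhorn_distance(x, y, eps=0.1, n_iters=50):
    """Entropy-regularized OT distance between two 1-D empirical
    distributions of loss values, via plain Sinkhorn iterations."""
    C = (x[:, None] - y[None, :]).abs()        # pairwise transport cost
    K = torch.exp(-C / eps)                    # Gibbs kernel
    mu = torch.full_like(x, 1.0 / x.numel())   # uniform weights on x
    nu = torch.full_like(y, 1.0 / y.numel())   # uniform weights on y
    v = torch.ones_like(y)
    for _ in range(n_iters):                   # alternating scaling updates
        u = mu / (K @ v + 1e-12)
        v = nu / (K.t() @ u + 1e-12)
    P = u[:, None] * K * v[None, :]            # transport plan
    return (P * C).sum()


def robust_fair_loss(per_pair_loss, group_ids, bad_pair_mask, lam=1.0):
    """Combine mined-pair down-weighting with a per-group Sinkhorn penalty.

    per_pair_loss: (N,) contrastive loss of each image-text pair
    group_ids:     (N,) protected-attribute id of each sample
    bad_pair_mask: (N,) bool, True for pairs flagged by the mining step
                   (the mining rule itself is described in the paper)
    """
    # Drop (weight 0) the pairs flagged as faulty; a softer
    # re-weighting would be a drop-in replacement.
    weights = (~bad_pair_mask).float()
    base = (weights * per_pair_loss).sum() / weights.sum().clamp(min=1.0)

    # Penalize each protected group's loss distribution for
    # deviating from the overall loss distribution.
    penalty = per_pair_loss.new_zeros(())
    for g in group_ids.unique():
        group_losses = per_pair_loss[group_ids == g]
        penalty = penalty + sinkhorn_distance(group_losses, per_pair_loss)
    return base + lam * penalty
```

In a training loop one would compute `per_pair_loss` from the VLM's image-text logits for the batch, call `robust_fair_loss`, and backpropagate through the result; `lam` trades off task accuracy against the fairness penalty.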

Sparsh Bansal, Mingyang Wu, Xin Wang, Shu Hu

Subject: Medical Research Methods

Sparsh Bansal, Mingyang Wu, Xin Wang, Shu Hu. Robust Fairness Vision-Language Learning for Medical Image Analysis [EB/OL]. (2025-05-05) [2025-07-01]. https://arxiv.org/abs/2505.03153
