
AI Alignment in Medical Imaging: Unveiling Hidden Biases Through Counterfactual Analysis

Source: arXiv
Abstract

Machine learning (ML) systems for medical imaging have demonstrated remarkable diagnostic capabilities, but their susceptibility to biases poses significant risks, since biases may negatively impact generalization performance. In this paper, we introduce a novel statistical framework to evaluate the dependency of medical imaging ML models on sensitive attributes, such as demographics. Our method leverages the concept of counterfactual invariance, measuring the extent to which a model's predictions remain unchanged under hypothetical changes to sensitive attributes. We present a practical algorithm that combines conditional latent diffusion models with statistical hypothesis testing to identify and quantify such biases without requiring direct access to counterfactual data. Through experiments on synthetic datasets and large-scale real-world medical imaging datasets, including CheXpert and MIMIC-CXR, we demonstrate that our approach aligns closely with counterfactual fairness principles and outperforms standard baselines. This work provides a robust tool to ensure that ML diagnostic systems generalize well, e.g., across demographic groups, offering a critical step towards AI safety in healthcare. Code: https://github.com/Neferpitou3871/AI-Alignment-Medical-Imaging.
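To illustrate the idea, the following is a minimal sketch of a counterfactual-invariance check in the spirit the abstract describes: a generative model produces a counterfactual version of each image under a flipped sensitive attribute, and a paired hypothesis test checks whether the classifier's predictions shift. Both `generate_counterfactual` and `model_predict` are hypothetical stand-ins (the paper uses conditional latent diffusion models and real diagnostic networks); the choice of a paired t-test here is an illustrative assumption, not necessarily the test used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def generate_counterfactual(image, attribute):
    # Stand-in for a conditional latent diffusion model: in the actual
    # pipeline this would resynthesize the image with the sensitive
    # attribute changed. Here we inject a small attribute-dependent shift.
    return image + 0.01 * attribute + rng.normal(0.0, 0.001, image.shape)

def model_predict(image):
    # Stand-in diagnostic classifier returning a scalar risk score.
    return float(image.mean())

def counterfactual_invariance_test(images, alpha=0.05):
    """Paired test: do predictions change when the sensitive attribute flips?"""
    orig = np.array([model_predict(x) for x in images])
    cf = np.array([model_predict(generate_counterfactual(x, attribute=1.0))
                   for x in images])
    _, p_value = stats.ttest_rel(orig, cf)
    # Rejecting the null is evidence the model depends on the attribute.
    return p_value, bool(p_value < alpha)

images = [rng.normal(0.0, 1.0, (8, 8)) for _ in range(50)]
p_value, biased = counterfactual_invariance_test(images)
```

With the synthetic shift baked into `generate_counterfactual`, the test detects the attribute dependence; on a truly counterfactually invariant model the prediction differences would be pure noise and the test would typically fail to reject.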

Haroui Ma, Francesco Quinzan, Theresa Willem, Stefan Bauer

Subjects: Medical research methods; Theory of medicine and health

Haroui Ma, Francesco Quinzan, Theresa Willem, Stefan Bauer. AI Alignment in Medical Imaging: Unveiling Hidden Biases Through Counterfactual Analysis [EB/OL]. (2025-04-28) [2025-06-04]. https://arxiv.org/abs/2504.19621.
