Detecting Dataset Bias in Medical AI: A Generalized and Modality-Agnostic Auditing Framework

Source: arXiv

Abstract

Data-driven AI is establishing itself at the center of evidence-based medicine. However, reports of shortcomings and unexpected behavior are growing due to AI's reliance on association-based learning. A major reason for this behavior is that latent bias in machine learning datasets can be amplified during training and/or hidden during testing. We present a data modality-agnostic auditing framework for generating targeted hypotheses about sources of bias, which we refer to as Generalized Attribute Utility and Detectability-Induced bias Testing (G-AUDIT) for datasets. Our method examines the relationship between task-level annotations and data properties, including protected attributes (e.g., race, age, sex) and environment and acquisition characteristics (e.g., clinical site, imaging protocols). G-AUDIT automatically quantifies the extent to which the observed data attributes may enable shortcut learning or, in the case of testing data, hide predictions made based on spurious associations. We demonstrate the broad applicability and value of our method by analyzing large-scale medical datasets for three distinct modalities and learning tasks: skin lesion classification in images, stigmatizing language classification in Electronic Health Records (EHR), and mortality prediction for ICU tabular data. In each setting, G-AUDIT successfully identifies subtle biases commonly overlooked by traditional qualitative methods that focus primarily on social and ethical objectives, underscoring its practical value in exposing dataset-level risks and supporting the downstream development of reliable AI systems. Our method paves the way for a deeper understanding of machine learning datasets throughout the AI development life cycle, from initial prototyping all the way to regulation, and creates opportunities to reduce model bias, enabling safer and more trustworthy AI systems.
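To make the two quantities in the framework's name concrete, attribute "utility" (how predictive an attribute is of the task label) and attribute "detectability" (how recoverable the attribute is from the model's inputs), here is a minimal, hypothetical Python sketch of an audit in that spirit. It is not the authors' implementation: the probe model (a random forest), the metric (cross-validated balanced accuracy), the function names, and the example column names are all assumptions made for illustration.

```python
# Illustrative sketch of the two quantities G-AUDIT reasons about.
# NOT the authors' implementation; probe model, metric, and names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def attribute_utility(df: pd.DataFrame, attribute: str, label: str) -> float:
    """How well the attribute alone predicts the task label
    (a proxy for shortcut-learning potential)."""
    X = pd.get_dummies(df[[attribute]]).to_numpy()  # one-hot encode the attribute
    y = df[label].to_numpy()
    probe = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(probe, X, y, cv=5, scoring="balanced_accuracy").mean()

def attribute_detectability(df: pd.DataFrame, attribute: str,
                            feature_cols: list[str]) -> float:
    """How well the attribute can be recovered from the model's input
    features; an attribute a model cannot detect cannot act as a shortcut."""
    X = df[feature_cols].to_numpy()
    a = df[attribute].to_numpy()
    probe = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(probe, X, a, cv=5, scoring="balanced_accuracy").mean()

# Hypothetical audit over candidate bias sources in an ICU-style table:
# df = pd.read_csv("icu_cohort.csv")  # assumed file and columns
# for attr in ["clinical_site", "sex", "age_bin"]:
#     u = attribute_utility(df, attr, label="mortality")
#     d = attribute_detectability(df, attr, feature_cols=["hr", "sbp", "lactate"])
#     print(f"{attr}: utility={u:.2f}, detectability={d:.2f}")
```

In this reading, an attribute that scores high on both axes (predictive of the label and recoverable from the inputs) is a plausible shortcut candidate worth flagging for targeted follow-up.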

Mathias Unberath, Adarsh Subbaswamy, Mitchell Pavlak, Keith Harrigian, Ayah Zirikly, Nathan Drenkow

Subjects: Current State of Medicine; Medical Development; Medical Research Methods

Mathias Unberath, Adarsh Subbaswamy, Mitchell Pavlak, Keith Harrigian, Ayah Zirikly, Nathan Drenkow. Detecting Dataset Bias in Medical AI: A Generalized and Modality-Agnostic Auditing Framework [EB/OL]. (2025-03-12) [2025-05-04]. https://arxiv.org/abs/2503.09969.