Understanding challenges to the interpretation of disaggregated evaluations of algorithmic fairness
Disaggregated evaluation across subgroups is critical for assessing the fairness of machine learning models, but its uncritical use can mislead practitioners. We show that equal performance across subgroups is an unreliable measure of fairness when data are representative of the relevant populations but reflective of real-world disparities. Furthermore, when data are not representative due to selection bias, both disaggregated evaluation and alternative approaches based on conditional independence testing may be invalid without explicit assumptions regarding the bias mechanism. We use causal graphical models to predict metric stability across subgroups under different data generating processes. Our framework suggests complementing disaggregated evaluations with explicit causal assumptions and analysis to control for confounding and distribution shift, including conditional independence testing and weighted performance estimation. These findings have broad implications for how practitioners design and interpret model assessments given the ubiquity of disaggregated evaluation.
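Below is a minimal illustrative sketch (not the authors' code) contrasting the two evaluation strategies the abstract discusses: naive disaggregated evaluation of a per-subgroup metric versus weighted performance estimation that adjusts for an assumed selection-bias mechanism. The column names, the use of accuracy as the metric, and the selection-probability model are all illustrative assumptions.

```python
# Sketch of disaggregated vs. inverse-probability-weighted evaluation.
# All names (group, y, y_hat, p_select) and the selection model are assumptions
# made for illustration only.
import numpy as np
import pandas as pd

def disaggregated_accuracy(df, group_col="group", label_col="y", pred_col="y_hat"):
    """Naive per-subgroup accuracy; valid only if the sample represents each subgroup."""
    return {g: float(np.mean(sub[label_col] == sub[pred_col]))
            for g, sub in df.groupby(group_col)}

def weighted_disaggregated_accuracy(df, selection_prob_col="p_select",
                                    group_col="group", label_col="y", pred_col="y_hat"):
    """Per-subgroup accuracy weighted by inverse probability of selection.

    Assumes each row carries an estimated probability of having entered the
    evaluation sample (e.g., from a model of the selection mechanism); weighting
    by its inverse re-targets the estimate toward the full population.
    """
    out = {}
    for g, sub in df.groupby(group_col):
        weights = 1.0 / sub[selection_prob_col]
        correct = (sub[label_col] == sub[pred_col]).astype(float)
        out[g] = float(np.average(correct, weights=weights))
    return out

# Example usage with synthetic data.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),
    "y": rng.integers(0, 2, size=n),
})
df["y_hat"] = np.where(rng.random(n) < 0.8, df["y"], 1 - df["y"])  # roughly 80%-accurate predictor
df["p_select"] = 0.3 + 0.5 * df["y"]  # assumed: positive cases more likely to be sampled

print(disaggregated_accuracy(df))
print(weighted_disaggregated_accuracy(df))
```

When the selection probabilities vary within a subgroup (here, by outcome), the weighted and unweighted per-subgroup estimates diverge, which is the kind of gap the paper's causal framework is intended to anticipate.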
Stephen R. Pfohl, Natalie Harris, Chirag Nagpal, David Madras, Vishwali Mhasawade, Olawale Salaudeen, Awa Dieng, Shannon Sequeira, Santiago Arciniegas, Lillian Sung, Nnamdi Ezeanochie, Heather Cole-Lewis, Katherine Heller, Sanmi Koyejo, Alexander D'Amour
Computing Technology, Computer Technology
Stephen R. Pfohl, Natalie Harris, Chirag Nagpal, David Madras, Vishwali Mhasawade, Olawale Salaudeen, Awa Dieng, Shannon Sequeira, Santiago Arciniegas, Lillian Sung, Nnamdi Ezeanochie, Heather Cole-Lewis, Katherine Heller, Sanmi Koyejo, Alexander D'Amour. Understanding challenges to the interpretation of disaggregated evaluations of algorithmic fairness [EB/OL]. (2025-06-04) [2025-07-09]. https://arxiv.org/abs/2506.04193.