
Subgroups Matter for Robust Bias Mitigation


Source: arXiv
Abstract

Despite the constant development of new bias mitigation methods for machine learning, no method consistently succeeds, and a fundamental question remains unanswered: when and why do bias mitigation techniques fail? In this paper, we hypothesise that a key factor may be the often-overlooked but crucial step shared by many bias mitigation methods: the definition of subgroups. To investigate this, we conduct a comprehensive evaluation of state-of-the-art bias mitigation methods across multiple vision and language classification tasks, systematically varying subgroup definitions, including coarse, fine-grained, intersectional, and noisy subgroups. Our results reveal that subgroup choice significantly impacts performance, with certain groupings paradoxically leading to worse outcomes than no mitigation at all. Our findings suggest that observing a disparity between a set of subgroups is not a sufficient reason to use those subgroups for mitigation. Through theoretical analysis, we explain these phenomena and uncover a counter-intuitive insight that, in some cases, improving fairness with respect to a particular set of subgroups is best achieved by using a different set of subgroups for mitigation. Our work highlights the importance of careful subgroup definition in bias mitigation and presents it as an alternative lever for improving the robustness and fairness of machine learning models.
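The abstract describes systematically varying subgroup definitions (coarse, fine-grained, intersectional, and noisy) before applying group-based mitigation. The sketch below is a hypothetical illustration of how such alternative subgroup label vectors could be constructed from attribute annotations; it is not the authors' code, and the attributes, noise rate, and variable names are assumptions for illustration only.

```python
# Hypothetical sketch (not from the paper): building alternative subgroup
# definitions from attribute annotations, to probe how sensitive a
# group-based mitigation method is to the choice of subgroups.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Assumed toy binary attribute annotations for each sample.
sex = rng.integers(0, 2, size=n)        # e.g. 0/1
age_group = rng.integers(0, 2, size=n)  # e.g. young/old

# Coarse subgroups: a single attribute (2 groups).
coarse = sex

# Intersectional subgroups: cross both attributes (4 groups).
intersectional = sex * 2 + age_group

# Fine-grained subgroups: split each intersectional group again,
# standing in for an additional annotated attribute (8 groups).
fine = intersectional * 2 + rng.integers(0, 2, size=n)

# Noisy subgroups: flip a fraction of the coarse labels to simulate
# annotation noise in the group variable.
noise_rate = 0.2
flip = rng.random(n) < noise_rate
noisy = np.where(flip, 1 - coarse, coarse)

# Any of these label vectors could be passed as the "group" input of a
# group-based mitigation method (e.g. a group-reweighting scheme) to test
# how the subgroup definition affects fairness outcomes.
print({name: len(np.unique(g)) for name, g in
       [("coarse", coarse), ("intersectional", intersectional),
        ("fine", fine), ("noisy", noisy)]})
```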

Anissa Alloula, Charles Jones, Ben Glocker, Bartłomiej W. Papież

Subject: computing technology, computer technology

Anissa Alloula, Charles Jones, Ben Glocker, Bartłomiej W. Papież. Subgroups Matter for Robust Bias Mitigation [EB/OL]. (2025-05-27) [2025-06-15]. https://arxiv.org/abs/2505.21363.
