国家预印本平台 (National Preprint Platform)

When majority rules, minority loses: bias amplification of gradient descent


Source: arXiv
Abstract (English)

Despite growing empirical evidence of bias amplification in machine learning, its theoretical foundations remain poorly understood. We develop a formal framework for majority-minority learning tasks, showing how standard training can favor majority groups and produce stereotypical predictors that neglect minority-specific features. Assuming population and variance imbalance, our analysis reveals three key findings: (i) the close proximity between "full-data" and stereotypical predictors, (ii) the dominance of a region where training the entire model tends to merely learn the majority traits, and (iii) a lower bound on the additional training required. Our results are illustrated through experiments in deep learning for tabular and image classification tasks.
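The population-imbalance mechanism the abstract describes can be illustrated with a minimal toy sketch (this construction is hypothetical and is not the paper's formal model): a linear predictor is trained by plain gradient descent on a mixture where the majority group's label depends on one feature and the minority group's on another. With a 90/10 split, the learned weights concentrate on the majority feature, and minority accuracy lags behind.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task (not from the paper): labels for the majority
# group depend on feature 0, labels for the minority group on feature 1.
n_maj, n_min = 900, 100                 # population imbalance (90% / 10%)
X_maj = rng.normal(0.0, 1.0, (n_maj, 2))
y_maj = np.sign(X_maj[:, 0])            # majority label: sign of feature 0
X_min = rng.normal(0.0, 1.0, (n_min, 2))
y_min = np.sign(X_min[:, 1])            # minority label: sign of feature 1

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])

# Plain gradient descent on the average logistic loss of a linear predictor w.
w = np.zeros(2)
lr = 0.5
for _ in range(500):
    margins = y * (X @ w)
    # d/dw of mean(log(1 + exp(-margin))) for a linear model
    grad = -(y[:, None] * X / (1.0 + np.exp(margins))[:, None]).mean(axis=0)
    w -= lr * grad

def accuracy(Xg, yg):
    return float(np.mean(np.sign(Xg @ w) == yg))

acc_maj = accuracy(X_maj, y_maj)
acc_min = accuracy(X_min, y_min)
print(f"w = {w}, majority acc = {acc_maj:.2f}, minority acc = {acc_min:.2f}")
```

Because the gradient averages over all samples, the majority group's signal dominates each update: the weight on feature 0 grows much larger than on feature 1, yielding the "stereotypical" predictor behavior the abstract refers to.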

François Bachoc, Jérôme Bolte, Ryan Boustany, Jean-Michel Loubes

IMT, TSE-R, TSE-R, IMT

Subject: computing and computer technology

François Bachoc, Jérôme Bolte, Ryan Boustany, Jean-Michel Loubes. When majority rules, minority loses: bias amplification of gradient descent [EB/OL]. (2025-05-19) [2025-06-17]. https://arxiv.org/abs/2505.13122.
