The AI Fairness Myth: A Position Paper on Context-Aware Bias
Defining fairness in AI remains a persistent challenge, largely due to its deeply context-dependent nature and the lack of a universal definition. While numerous mathematical formulations of fairness exist, they sometimes conflict with one another and diverge from social, economic, and legal understandings of justice. Traditional quantitative definitions primarily focus on statistical comparisons, but they often fail to simultaneously satisfy multiple fairness constraints. Drawing on philosophical theories (Rawls' Difference Principle and Dworkin's theory of equality) and empirical evidence supporting affirmative action, we argue that fairness sometimes necessitates deliberate, context-aware preferential treatment of historically marginalized groups. Rather than viewing bias solely as a flaw to eliminate, we propose a framework that embraces corrective, intentional biases to promote genuine equality of opportunity. Our approach involves identifying unfairness, recognizing protected groups/individuals, applying corrective strategies, measuring impact, and iterating improvements. By bridging mathematical precision with ethical and contextual considerations, we advocate for an AI fairness paradigm that goes beyond neutrality to actively advance social justice.
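The statistical fairness criteria the abstract alludes to can be made concrete with a minimal sketch (our illustration, not the authors' code): two widely used definitions, demographic parity and equal opportunity, can give conflicting verdicts on the very same predictions, which is part of why no single mathematical formulation suffices.

```python
# Illustrative sketch (not from the paper): two common statistical
# fairness criteria that can disagree on the same predictions.

def demographic_parity_diff(y_pred, group):
    # |P(pred=1 | group A) - P(pred=1 | group B)|
    a = [p for p, g in zip(y_pred, group) if g == "A"]
    b = [p for p, g in zip(y_pred, group) if g == "B"]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def equal_opportunity_diff(y_pred, y_true, group):
    # |TPR_A - TPR_B|: gap in true-positive rates between groups
    def tpr(grp):
        pos = [p for p, t, g in zip(y_pred, y_true, group)
               if g == grp and t == 1]
        return sum(pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

# Toy data: identical selection rates, yet unequal true-positive rates
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
y_true = [1, 1, 0, 0, 1, 0, 1, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_diff(y_pred, group))           # 0.0: parity holds
print(equal_opportunity_diff(y_pred, y_true, group))    # ~0.167: opportunity gap
```

On this toy data the classifier selects each group at the same rate (demographic parity is satisfied exactly), yet qualified members of group A are approved less often than qualified members of group B, so equal opportunity is violated. Which metric "counts" depends on context, which is the position the paper argues for.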
Kessia Nepomuceno, Fabio Petrillo
Subjects: Computing Technology, Computer Technology; Law
Kessia Nepomuceno, Fabio Petrillo. The AI Fairness Myth: A Position Paper on Context-Aware Bias [EB/OL]. (2025-05-01) [2025-06-21]. https://arxiv.org/abs/2505.00965.