Rethinking Algorithmic Fairness for Human-AI Collaboration
Existing approaches to algorithmic fairness aim to ensure equitable outcomes if human decision-makers comply perfectly with algorithmic decisions. However, perfect compliance with the algorithm is rarely a reality or even a desirable outcome in human-AI collaboration. Yet, recent studies have shown that selective compliance with fair algorithms can amplify discrimination relative to the prior human policy. As a consequence, ensuring equitable outcomes requires fundamentally different algorithmic design principles that ensure robustness to the decision-maker's (a priori unknown) compliance pattern. We define the notion of compliance-robustly fair algorithmic recommendations that are guaranteed to (weakly) improve fairness in decisions, regardless of the human's compliance pattern. We propose a simple optimization strategy to identify the best performance-improving compliance-robustly fair policy. However, we show that it may be infeasible to design algorithmic recommendations that are simultaneously fair in isolation, compliance-robustly fair, and more accurate than the human policy; thus, if our goal is to improve the equity and accuracy of human-AI collaboration, it may not be desirable to enforce traditional algorithmic fairness constraints. We illustrate the value of our approach on criminal sentencing data before and after the introduction of an algorithmic risk assessment tool in Virginia.
Haosen Ge, Hamsa Bastani, Osbert Bastani
Computing Technology, Computer Technology
Haosen Ge, Hamsa Bastani, Osbert Bastani. Rethinking Algorithmic Fairness for Human-AI Collaboration [EB/OL]. (2025-06-29) [2025-07-16]. https://arxiv.org/abs/2310.03647.