
Enforcing Fairness Where It Matters: An Approach Based on Difference-of-Convex Constraints

Source: arXiv
Abstract

Fairness in machine learning has become a critical concern, particularly in high-stakes applications. Existing approaches often focus on achieving full fairness across all score ranges generated by predictive models, ensuring fairness in both high- and low-scoring populations. However, this stringent requirement can compromise predictive performance and may not align with the practical fairness concerns of stakeholders. In this work, we propose a novel framework for building partially fair machine learning models, which enforce fairness within a specific score range of interest, such as the middle range where decisions are most contested, while maintaining flexibility in other regions. We introduce two statistical metrics to rigorously evaluate partial fairness within a given score range, such as the top 20%-40% of scores. To achieve partial fairness, we propose an in-processing method that formulates model training as a constrained optimization problem with difference-of-convex constraints, which can be solved by an inexact difference-of-convex algorithm (IDCA). We provide a complexity analysis of IDCA for finding a nearly KKT point. Through numerical experiments on real-world datasets, we demonstrate that our framework achieves high predictive performance while enforcing partial fairness where it matters most.
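To make the partial-fairness idea concrete, the following is a minimal sketch, not the paper's method, of evaluating a demographic-parity-style gap restricted to one score range. The function name, quantile thresholds, and gap definition are illustrative assumptions; the paper's two metrics and its IDCA training procedure are defined in the full text.

import numpy as np

def partial_fairness_gap(scores, groups, lo=0.60, hi=0.80):
    # Hypothetical illustration: restrict attention to the top 20%-40%
    # of scores, i.e. the overall 0.60-0.80 quantile band.
    lo_thr, hi_thr = np.quantile(scores, [lo, hi])
    in_range = (scores >= lo_thr) & (scores < hi_thr)
    # Per-group rate of falling inside the contested score band.
    rates = [in_range[groups == g].mean() for g in np.unique(groups)]
    # Gap between the most- and least-represented groups; 0 means parity.
    return float(max(rates) - min(rates))

# Usage on synthetic data with two protected groups.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
groups = rng.integers(0, 2, size=1000)
print(partial_fairness_gap(scores, groups))

In the paper's in-processing approach, a constraint of this kind is imposed during training as a difference-of-convex constraint rather than evaluated only after the fact.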

Yutian He, Yankun Huang, Yao Yao, Qihang Lin

Subjects: Computing Technology, Computer Technology

Yutian He, Yankun Huang, Yao Yao, Qihang Lin. Enforcing Fairness Where It Matters: An Approach Based on Difference-of-Convex Constraints [EB/OL]. (2025-05-18) [2025-06-07]. https://arxiv.org/abs/2505.12530.
