FairLangProc: A Python package for fairness in NLP
The near-ubiquitous adoption of Large Language Models in recent years has raised societal concerns about their use in decision-making contexts such as organizational justice or healthcare. This, in turn, poses questions about the fairness of these models in critical settings, which has led to the development of a variety of procedures to address bias in Natural Language Processing. Although many datasets, metrics, and algorithms have been proposed to measure and mitigate harmful prejudice in Natural Language Processing, their implementations are diverse and far from centralized. In response, this paper presents FairLangProc, a comprehensive Python package providing a common implementation of some of the most recent advances in fairness in Natural Language Processing. It offers an interface compatible with the popular Hugging Face transformers library, aiming to encourage the widespread use and democratization of bias mitigation techniques. The implementation can be found at https://github.com/arturo-perez-peralta/FairLangProc.
Arturo Pérez-Peralta, Sandra Benítez-Peña, Rosa E. Lillo
Subject: Computing and computer technology
Arturo Pérez-Peralta, Sandra Benítez-Peña, Rosa E. Lillo. FairLangProc: A Python package for fairness in NLP [EB/OL]. (2025-08-05) [2025-08-16]. https://arxiv.org/abs/2508.03677.