
Analyzing Fairness of Computer Vision and Natural Language Processing Models

Source: arXiv
Abstract

Machine learning (ML) algorithms play a critical role in decision-making across various domains, such as healthcare, finance, education, and law enforcement. However, concerns about fairness and bias in these systems have raised significant ethical and social challenges. To address these challenges, this research utilizes two prominent fairness libraries, Fairlearn by Microsoft and AIF360 by IBM. These libraries offer comprehensive frameworks for fairness analysis, providing tools to evaluate fairness metrics, visualize results, and implement bias mitigation algorithms. The study focuses on assessing and mitigating biases for unstructured datasets using Computer Vision (CV) and Natural Language Processing (NLP) models. The primary objective is to present a comparative analysis of the performance of mitigation algorithms from the two fairness libraries. This analysis involves applying the algorithms individually, one at a time, at a single stage of the ML lifecycle (pre-processing, in-processing, or post-processing), as well as sequentially across more than one stage. The results reveal that some sequential applications improve the performance of mitigation algorithms by effectively reducing bias while maintaining the model's performance. Publicly available datasets from Kaggle were chosen for this research, providing a practical context for evaluating fairness in real-world machine learning workflows.
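As a rough illustration of the single-stage mitigation described in the abstract, the sketch below applies a Fairlearn post-processing mitigator (ThresholdOptimizer) on top of a plain scikit-learn baseline and compares the demographic parity difference before and after mitigation. This is not the paper's pipeline: the synthetic data, the sensitive attribute, and the choice of classifier are placeholders introduced here purely for illustration.

```python
# Minimal sketch (assumed, not taken from the paper): one post-processing
# mitigation stage with Fairlearn, measured before and after.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.metrics import demographic_parity_difference
from fairlearn.postprocessing import ThresholdOptimizer

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                      # placeholder features
group = rng.integers(0, 2, size=n)               # placeholder sensitive attribute
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

# Baseline model with no mitigation.
baseline = LogisticRegression().fit(X_tr, y_tr)
dpd_base = demographic_parity_difference(
    y_te, baseline.predict(X_te), sensitive_features=g_te)

# Post-processing stage: ThresholdOptimizer adjusts decision thresholds
# per group to satisfy a demographic-parity constraint.
mitigator = ThresholdOptimizer(
    estimator=baseline,
    constraints="demographic_parity",
    prefit=True,
    predict_method="predict_proba")
mitigator.fit(X_tr, y_tr, sensitive_features=g_tr)
dpd_mitigated = demographic_parity_difference(
    y_te, mitigator.predict(X_te, sensitive_features=g_te),
    sensitive_features=g_te)

print(f"demographic parity difference: baseline={dpd_base:.3f}, "
      f"mitigated={dpd_mitigated:.3f}")
```

A pre-processing or in-processing mitigator (for example, AIF360's Reweighing or Fairlearn's ExponentiatedGradient) could be chained before this step to mimic the sequential, multi-stage applications the abstract compares.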

Abdelkrim Kallich, Mohamed Eltayeb, Ahmed Rashed

DOI: 10.3390/info16030182

Subjects: Computing technology, computer science and technology, scientific research

Abdelkrim Kallich, Mohamed Eltayeb, Ahmed Rashed. Analyzing Fairness of Computer Vision and Natural Language Processing Models [EB/OL]. (2025-07-23) [2025-08-16]. https://arxiv.org/abs/2412.09900.
