ASR-FAIRBENCH: Measuring and Benchmarking Equity Across Speech Recognition Systems

Source: arXiv
Abstract

Automatic Speech Recognition (ASR) systems have become ubiquitous in everyday applications, yet significant disparities in performance across diverse demographic groups persist. In this work, we introduce the ASR-FAIRBENCH leaderboard, which is designed to assess both the accuracy and equity of ASR models in real time. Leveraging Meta's Fair-Speech dataset, which captures diverse demographic characteristics, we employ a mixed-effects Poisson regression model to derive an overall fairness score. This score is integrated with traditional metrics like Word Error Rate (WER) to compute the Fairness Adjusted ASR Score (FAAS), providing a comprehensive evaluation framework. Our approach reveals significant performance disparities in SOTA ASR models across demographic groups and offers a benchmark to drive the development of more inclusive ASR technologies.
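The abstract describes the evaluation recipe only at a high level: regress per-utterance word-error counts on demographic attributes with a mixed-effects Poisson model, turn the fitted disparities into a fairness score, and combine that score with WER into the FAAS. The sketch below illustrates one way such a pipeline could look; the column names, the disparity-to-score scaling, and the FAAS combination rule are illustrative assumptions, not the paper's actual definitions.

```python
# Minimal sketch of a fairness-adjusted ASR evaluation, assuming hypothetical
# per-utterance columns (errors, ref_words, gender, age_group, speaker).
# The scaling and the FAAS combination rule below are stand-ins, not the
# paper's formulas.

import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM


def fairness_adjusted_score(df: pd.DataFrame) -> dict:
    # Corpus-level WER: total word errors over total reference words.
    wer = df["errors"].sum() / df["ref_words"].sum()

    # Mixed-effects Poisson regression on error counts: demographic
    # attributes as fixed effects, speaker identity as a random effect.
    # (Utterance length is ignored here to keep the sketch short.)
    model = PoissonBayesMixedGLM.from_formula(
        "errors ~ gender + age_group",
        vc_formulas={"speaker": "0 + C(speaker)"},
        data=df,
    )
    result = model.fit_vb()  # variational Bayes fit

    # Treat the spread of the demographic fixed-effect coefficients
    # (skipping the intercept) as a crude disparity measure: zero spread
    # would mean identical expected error rates across groups.
    demo_coefs = np.asarray(result.fe_mean)[1:]
    disparity = float(np.ptp(demo_coefs)) if demo_coefs.size else 0.0
    fairness_score = 100.0 / (1.0 + disparity)  # assumed 0-100 scaling

    # Assumed combination rule: reward low WER and high equity together.
    faas = (1.0 - wer) * fairness_score
    return {"WER": wer, "fairness_score": fairness_score, "FAAS": faas}


if __name__ == "__main__":
    # Synthetic data purely to make the sketch runnable end to end.
    rng = np.random.default_rng(0)
    n = 400
    demo = pd.DataFrame({
        "gender": rng.choice(["female", "male"], size=n),
        "age_group": rng.choice(["18-30", "31-45", "46-65"], size=n),
        "speaker": rng.choice([f"spk{i:02d}" for i in range(40)], size=n),
        "ref_words": rng.integers(5, 40, size=n),
    })
    demo["errors"] = rng.poisson(0.1 * demo["ref_words"])
    print(fairness_adjusted_score(demo))
```

On real Fair-Speech-style data one would also account for utterance length (e.g., via an exposure term) and substitute the paper's actual fairness and FAAS definitions for the placeholder scaling used here.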

Anand Rai, Satyam Rahangdale, Utkarsh Anand, Animesh Mukherjee

Computing Technology, Computer Technology

Anand Rai, Satyam Rahangdale, Utkarsh Anand, Animesh Mukherjee. ASR-FAIRBENCH: Measuring and Benchmarking Equity Across Speech Recognition Systems [EB/OL]. (2025-05-16) [2025-06-04]. https://arxiv.org/abs/2505.11572.
