How to Evaluate Automatic Speech Recognition: Comparing Different Performance and Bias Measures
There is increasing evidence that automatic speech recognition (ASR) systems are biased against different speakers and speaker groups, e.g., due to gender, age, or accent. Research on bias in ASR has so far focused primarily on detecting and quantifying bias and on developing mitigation approaches. Despite this progress, how to measure a system's performance and bias remains an open question. In this study, we compare different performance and bias measures, both from the literature and newly proposed, to evaluate state-of-the-art end-to-end ASR systems for Dutch. Our experiments use several bias mitigation strategies to address bias against different speaker groups. The findings reveal that averaged error rates, the standard in ASR research, are not sufficient on their own and should be supplemented by other measures. The paper ends with recommendations for reporting ASR performance and bias that better represent a system's performance for diverse speaker groups and its overall bias.
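The abstract's core point, that an averaged error rate can mask group-level differences, can be illustrated with a small Python sketch. The sketch below computes the word error rate (WER) per speaker group alongside the overall average, plus a simple gap-to-best-group measure; the group names, data, and the gap measure are hypothetical illustrations, not the specific measures the paper compares.

# Hypothetical sketch: overall WER can hide group-level differences,
# so report per-group WER and a simple bias gap as well. The data and
# the gap measure are illustrative, not the paper's definitions.

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token lists."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,           # deletion
                dp[j - 1] + 1,       # insertion
                prev + (r != h),     # substitution (or match)
            )
    return dp[-1]

def wer(pairs):
    """Corpus-level WER: total edits over total reference words."""
    edits = sum(edit_distance(r.split(), h.split()) for r, h in pairs)
    words = sum(len(r.split()) for r, _ in pairs)
    return edits / words

# Hypothetical (reference, hypothesis) pairs keyed by speaker group.
data = {
    "group_a": [("the cat sat", "the cat sat"),
                ("open the door", "open a door")],
    "group_b": [("the cat sat", "the cat at"),
                ("open the door", "pen the door")],
}

per_group = {g: wer(pairs) for g, pairs in data.items()}
overall = wer([p for pairs in data.values() for p in pairs])
best = min(per_group.values())

print(f"overall WER: {overall:.3f}")
for g, w in per_group.items():
    # Gap to the best-performing group: one possible bias measure.
    print(f"{g}: WER={w:.3f}, gap={w - best:+.3f}")

Here the overall WER of 0.25 sits between the per-group values (0.167 and 0.333), so reporting it alone would understate the error rate experienced by the worse-served group.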
Tanvina Patel, Wiebke Hutiri, Aaron Yi Ding, Odette Scharenborg
Computing Technology, Computer Technology
Tanvina Patel, Wiebke Hutiri, Aaron Yi Ding, Odette Scharenborg. How to Evaluate Automatic Speech Recognition: Comparing Different Performance and Bias Measures [EB/OL]. (2025-07-08) [2025-07-19]. https://arxiv.org/abs/2507.05885