Unsupervised Rhythm and Voice Conversion to Improve ASR on Dysarthric Speech
Automatic speech recognition (ASR) systems struggle with dysarthric speech due to high inter-speaker variability and slow speaking rates. To address this, we explore dysarthric-to-healthy speech conversion for improved ASR performance. Our approach extends the Rhythm and Voice (RnV) conversion framework by introducing a syllable-based rhythm modeling method suited for dysarthric speech. We assess its impact on ASR by training LF-MMI models and fine-tuning Whisper on converted speech. Experiments on the Torgo corpus reveal that LF-MMI achieves significant word error rate reductions, especially for more severe cases of dysarthria, while fine-tuning Whisper on converted data has minimal effect on its performance. These results highlight the potential of unsupervised rhythm and voice conversion for dysarthric ASR. Code available at: https://github.com/idiap/RnV
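To make the idea concrete, below is a toy, self-contained sketch of duration-based rhythm normalization: speech is segmented at acoustic onsets (a crude syllable proxy) and each segment is time-stretched toward a uniform target duration. This is not the paper's RnV method (see https://github.com/idiap/RnV for the actual implementation); the file name and the 0.25 s target duration are illustrative assumptions. It only illustrates why normalizing the slow, irregular rhythm of dysarthric speech can help a downstream ASR model.

```python
# Toy illustration of syllable-level rhythm normalization (NOT the RnV method).
import librosa
import numpy as np
import soundfile as sf

def crude_rhythm_normalize(wav_path: str, out_path: str,
                           target_seg_dur: float = 0.25) -> None:
    """Push each onset-delimited segment toward a uniform target duration."""
    y, sr = librosa.load(wav_path, sr=16000)
    # Onset positions serve as rough syllable boundaries.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    bounds = np.concatenate(([0], onsets, [len(y)]))
    out = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        seg = y[start:end]
        if len(seg) < sr // 50:  # keep segments shorter than 20 ms as-is
            out.append(seg)
            continue
        # rate > 1 speeds the segment up; slow dysarthric syllables get compressed.
        rate = (len(seg) / sr) / target_seg_dur
        out.append(librosa.effects.time_stretch(seg, rate=rate))
    sf.write(out_path, np.concatenate(out), sr)

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    crude_rhythm_normalize("dysarthric_utterance.wav", "rhythm_normalized.wav")
```

The normalized audio could then be fed to any ASR system; the paper instead performs unsupervised rhythm and voice conversion jointly and evaluates with LF-MMI models and fine-tuned Whisper on the Torgo corpus.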
Karl El Hajal, Enno Hermann, Sevada Hovsepyan, Mathew Magimai.-Doss
Subject: Computing Technology, Computer Technology
Karl El Hajal, Enno Hermann, Sevada Hovsepyan, Mathew Magimai.-Doss. Unsupervised Rhythm and Voice Conversion to Improve ASR on Dysarthric Speech [EB/OL]. (2025-06-02) [2025-06-22]. https://arxiv.org/abs/2506.01618.