Customizing Speech Recognition Model with Large Language Model Feedback
Automatic speech recognition (ASR) systems have achieved strong performance on general transcription tasks. However, they continue to struggle with recognizing rare named entities and adapting to domain mismatches. In contrast, large language models (LLMs), trained on massive internet-scale datasets, are often more effective across a wide range of domains. In this work, we propose a reinforcement learning-based approach for unsupervised domain adaptation that leverages unlabeled data to enhance transcription quality, particularly for named entities affected by domain mismatch, through feedback from an LLM. Given contextual information, our framework employs an LLM as the reward model to score hypotheses from the ASR model. These scores serve as reward signals to fine-tune the ASR model via reinforcement learning. Our method achieves a 21% improvement in entity word error rate over conventional self-training methods.
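The abstract does not include code; the following is a minimal sketch of the training loop it describes, assuming a PyTorch-style ASR model that exposes a hypothesis-sampling API. The names (llm_reward, asr_model.sample) are hypothetical placeholders, and the REINFORCE-with-baseline objective shown is one plausible instantiation of the reinforcement learning fine-tuning the abstract mentions, not necessarily the authors' exact method.

# Minimal sketch (not the authors' code): fine-tune an ASR model with
# rewards from an LLM that scores hypotheses given contextual information.
import torch


def llm_reward(hypothesis: str, context: str) -> float:
    """Hypothetical reward model: prompt an LLM to score how well a
    hypothesis fits the given context (e.g., correct named entities).
    A real system would call an LLM here and parse a scalar score."""
    raise NotImplementedError("replace with an actual LLM scoring call")


def reinforce_step(asr_model, optimizer, audio, context, n_best=4):
    # Assumed API: sample N hypotheses and their sequence log-probs.
    hyps, log_probs = asr_model.sample(audio, num_samples=n_best)
    rewards = torch.tensor([llm_reward(h, context) for h in hyps])
    # Mean-reward baseline reduces the variance of the policy gradient.
    advantages = rewards - rewards.mean()
    # REINFORCE: raise log-probs of hypotheses the LLM scores highly.
    loss = -(advantages.detach() * log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()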
Shaoshi Ling, Guoli Ye
Computing technology, computer technology
Shaoshi Ling, Guoli Ye. Customizing Speech Recognition Model with Large Language Model Feedback [EB/OL]. (2025-06-05) [2025-06-23]. https://arxiv.org/abs/2506.11091.