
Boosting CTC-Based ASR Using LLM-Based Intermediate Loss Regularization

Source: arXiv
Abstract

End-to-end (E2E) automatic speech recognition (ASR) systems have revolutionized the field by integrating all components into a single neural network, with attention-based encoder-decoder models achieving state-of-the-art performance. However, their autoregressive decoding process limits inference speed, making them unsuitable for real-time applications. In contrast, CTC-based models offer faster, non-autoregressive decoding but struggle to model linguistic dependencies effectively. Addressing this challenge, we propose a novel auxiliary loss framework called Language-Aware Intermediate Loss (LAIL) to enhance CTC-based ASR using the linguistic knowledge of large language models (LLMs). By attaching connector layers to intermediate encoder layers, LAIL maps outputs to the embedding space of an LLM and computes a causal language modeling loss during training. This approach enhances linguistic modeling while preserving the computational efficiency of CTC decoding. Using the Conformer architecture and various LLaMA models, we demonstrate significant improvements in Word Error Rate (WER) on the LibriSpeech, TEDLIUM2, and WSJ corpora, achieving state-of-the-art performance for CTC-based ASR with minimal computational overhead.
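The abstract only outlines the LAIL mechanism, so the following is a minimal PyTorch sketch of how such an intermediate LLM-based loss could be wired up. All names here (`Connector`, `causal_lm_loss`, the weight `lam`) and the handling of frame-to-token length alignment are assumptions for illustration, not the paper's actual implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class Connector(nn.Module):
    """Hypothetical connector: projects intermediate Conformer states
    into the LLM's embedding space. How acoustic frames are aligned to
    text tokens is glossed over here; the real system needs some
    down-sampling/alignment step."""
    def __init__(self, encoder_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(encoder_dim, llm_dim)

    def forward(self, h):          # h: (batch, time, encoder_dim)
        return self.proj(h)        # -> (batch, time, llm_dim)

def causal_lm_loss(llm, embeds, labels):
    """Next-token prediction loss through a (frozen) causal LM.
    `llm` is assumed to map (batch, time, llm_dim) -> vocab logits."""
    logits = llm(embeds)
    # Shift so position t predicts token t+1, as in standard causal LMs.
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )

def total_loss(ctc_loss, intermediate_states, connectors, llm, labels,
               lam=0.3):
    """CTC objective plus LAIL terms from selected encoder layers.
    `lam` is a made-up weighting; the paper's value may differ."""
    lail = sum(
        causal_lm_loss(llm, conn(h), labels)
        for conn, h in zip(connectors, intermediate_states)
    )
    return ctc_loss + lam * lail
```

At inference time none of this is used: the connectors and the LLM are dropped, and decoding remains plain non-autoregressive CTC, which is how the approach keeps CTC's speed.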

Duygu Altinok

Subject: Computing and Computer Technology

Duygu Altinok. Boosting CTC-Based ASR Using LLM-Based Intermediate Loss Regularization [EB/OL]. (2025-06-28) [2025-07-16]. https://arxiv.org/abs/2506.22846
