
Leveraging LLM for Stuttering Speech: A Unified Architecture Bridging Recognition and Event Detection

Source: arXiv
English Abstract

The performance bottleneck of Automatic Speech Recognition (ASR) in stuttering speech scenarios has limited its applicability in domains such as speech rehabilitation. This paper proposes an LLM-driven ASR-SED multi-task learning framework that jointly optimizes the ASR and Stuttering Event Detection (SED) tasks. We propose a dynamic interaction mechanism in which the ASR branch leverages CTC-generated soft prompts to assist LLM context modeling, while the SED branch outputs stutter embeddings to enhance LLM comprehension of stuttered speech. We incorporate contrastive learning to strengthen the discriminative power of stuttering acoustic features and apply Focal Loss to mitigate the long-tailed distribution of stuttering event categories. Evaluations on the AS-70 Mandarin stuttering dataset demonstrate that our framework reduces the ASR character error rate (CER) to 5.45% (a 37.71% relative reduction) and achieves an average SED F1-score of 73.63% (a 46.58% relative improvement).
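As a point of reference for the class-imbalance handling mentioned in the abstract, the sketch below shows a generic focal loss over multi-class stuttering-event labels. It is a minimal PyTorch-style illustration of the technique named above, not the authors' implementation; the gamma value and the number of event classes in the toy usage are assumptions.

# Minimal focal-loss sketch for imbalanced stuttering-event classes
# (illustrative only; gamma and the class count are assumptions,
# not values taken from the paper).
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    # logits: (batch, num_classes); targets: (batch,) integer class labels
    log_probs = F.log_softmax(logits, dim=-1)                       # log p_c for every class
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # log p_t of the true class
    pt = log_pt.exp()
    # Down-weight easy examples (p_t close to 1) so rare, hard classes dominate the gradient.
    return (-(1.0 - pt) ** gamma * log_pt).mean()

# Toy usage with 5 hypothetical stutter-event classes.
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
print(focal_loss(logits, targets))

Compared with plain cross-entropy, the (1 - p_t)^gamma factor shrinks the contribution of well-classified examples, which is the usual motivation for focal loss on long-tailed label distributions such as stuttering event categories.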

Shangkun Huang, Jing Deng, Jintao Kang, Rong Zheng

Subjects: Computing Technology, Computer Technology | Language: Chinese

Shangkun Huang, Jing Deng, Jintao Kang, Rong Zheng. Leveraging LLM for Stuttering Speech: A Unified Architecture Bridging Recognition and Event Detection [EB/OL]. (2025-05-28) [2025-07-21]. https://arxiv.org/abs/2505.22005.
