National Preprint Platform

Joint ASR and Speaker Role Tagging with Serialized Output Training

Source: arXiv
Abstract

Automatic Speech Recognition (ASR) systems have made significant progress with large-scale pre-trained models. However, most current systems focus solely on transcribing the speech without identifying speaker roles, a function that is critical for conversational AI. In this work, we investigate the use of serialized output training (SOT) for joint ASR and speaker role tagging. By augmenting Whisper with role-specific tokens and fine-tuning it with SOT, we enable the model to generate role-aware transcriptions in a single decoding pass. We compare the SOT approach against a previous self-supervised baseline method on two real-world conversational datasets. Our findings show that this approach achieves more than a 10% reduction in multi-talker WER, demonstrating its feasibility as a unified model for speaker-role-aware speech transcription.
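The core idea above, prefixing each utterance with a speaker-role token and serializing the conversation into a single target sequence, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the role-token names (`<|doctor|>`, `<|patient|>`) and the first-in-first-out ordering by start time are illustrative assumptions, though FIFO ordering is the common SOT convention.

```python
# Sketch of building a serialized output training (SOT) target with
# role-specific tokens. Token names like <|doctor|> are hypothetical
# placeholders, not the tokens used in the paper.

def build_sot_target(segments):
    """Serialize role-tagged segments into a single training target.

    segments: list of (start_time, role, text) tuples.
    Utterances are ordered by start time (FIFO, the usual SOT
    convention) and each is prefixed with its speaker-role token,
    so one decoding pass yields a role-aware transcription.
    """
    ordered = sorted(segments, key=lambda s: s[0])
    return " ".join(f"<|{role}|> {text}" for _, role, text in ordered)

segments = [
    (3.2, "patient", "I have a headache."),
    (0.0, "doctor", "How are you feeling today?"),
]
target = build_sot_target(segments)
# target == "<|doctor|> How are you feeling today? <|patient|> I have a headache."
```

Fine-tuning a model such as Whisper on targets of this form teaches it to emit the role token before each utterance, so role tagging comes for free during decoding.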

Anfeng Xu, Tiantian Feng, Shrikanth Narayanan

Subject: Computing Technology, Computer Technology

Anfeng Xu, Tiantian Feng, Shrikanth Narayanan. Joint ASR and Speaker Role Tagging with Serialized Output Training [EB/OL]. (2025-06-12) [2025-07-19]. https://arxiv.org/abs/2506.10349.
