
End-to-End Speaker-Attributed ASR with Transformer


Source: arXiv
Abstract

This paper presents our recent effort on end-to-end speaker-attributed automatic speech recognition, which jointly performs speaker counting, speech recognition, and speaker identification for monaural multi-talker audio. Firstly, we thoroughly update the model architecture, previously designed around a long short-term memory (LSTM)-based attention encoder-decoder, by applying transformer architectures. Secondly, we propose a speaker deduplication mechanism to reduce speaker identification errors in highly overlapped regions. Experimental results on the LibriSpeechMix dataset show that the transformer-based architecture is especially good at counting the speakers and that the proposed model reduces the speaker-attributed word error rate by 47% over the LSTM-based baseline. Furthermore, on the LibriCSS dataset, which consists of real recordings of overlapped speech, the proposed model achieves concatenated minimum-permutation word error rates of 11.9% and 16.3% with and without target speaker profiles, respectively, both of which are state-of-the-art results for LibriCSS in the monaural setting.
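The abstract does not spell out the speaker deduplication mechanism in detail. The sketch below is a hypothetical illustration of the general idea as stated, namely preventing the same speaker from being assigned to multiple utterances that overlap in time; the function name `deduplicate_speakers`, the posterior-matrix input, and the greedy assignment strategy are all assumptions for illustration and are not taken from the paper.

```python
# A minimal, hypothetical sketch of speaker deduplication in overlapped regions.
# Assumption: we have per-utterance speaker posteriors over enrolled profiles and
# know which utterances overlap; overlapping utterances are forced onto distinct
# profiles by greedily picking the best non-conflicting one.
import numpy as np


def deduplicate_speakers(posteriors, overlaps):
    """posteriors: (num_utterances, num_profiles) speaker posterior matrix.
    overlaps: list of (i, j) index pairs of utterances that overlap in time.
    Returns one profile index per utterance, with overlapping utterances
    assigned to distinct profiles."""
    num_utt = posteriors.shape[0]
    assignment = [None] * num_utt
    # Resolve utterances in order of confidence of their top speaker guess.
    order = np.argsort(-posteriors.max(axis=1))
    for u in order:
        # Profiles already taken by utterances that overlap with u.
        taken = {assignment[v]
                 for (i, j) in overlaps if u in (i, j)
                 for v in (i, j) if v != u and assignment[v] is not None}
        # Pick the highest-posterior profile not used by an overlapping utterance.
        for p in np.argsort(-posteriors[u]):
            if p not in taken:
                assignment[u] = int(p)
                break
    return assignment


# Toy example: two overlapping utterances whose top guess is the same profile;
# the second one is pushed to its next-best, non-conflicting profile.
post = np.array([[0.70, 0.20, 0.10],
                 [0.60, 0.35, 0.05]])
print(deduplicate_speakers(post, overlaps=[(0, 1)]))  # -> [0, 1]
```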

Takuya Yoshioka, Zhuo Chen, Naoyuki Kanda, Yashesh Gaur, Zhong Meng, Xiaofei Wang, Guoli Ye

Subjects: Computing and Computer Technology; Communication; Wireless Communication

Takuya Yoshioka, Zhuo Chen, Naoyuki Kanda, Yashesh Gaur, Zhong Meng, Xiaofei Wang, Guoli Ye. End-to-End Speaker-Attributed ASR with Transformer[EB/OL]. (2021-04-05)[2025-07-16]. https://arxiv.org/abs/2104.02128.
