Multi-Utterance Speech Separation and Association Trained on Short Segments
Current deep neural network (DNN) based speech separation faces a fundamental challenge: while models must be trained on short segments due to computational constraints, real-world applications typically require processing recordings that are significantly longer than those seen during training and that contain multiple utterances per speaker. In this paper, we investigate how existing approaches perform in this challenging scenario and propose a frequency-temporal recurrent neural network (FTRNN) that effectively bridges this gap. Our FTRNN employs a full-band module to model frequency dependencies within each time frame and a sub-band module to model temporal patterns in each frequency band. Despite being trained on short fixed-length segments of 10 s, our model demonstrates robust separation when processing signals significantly longer (21-121 s) than the training segments, and it preserves speaker association across utterance gaps exceeding those seen during training. Unlike the conventional segment-separation-stitch paradigm, our lightweight approach (0.9 M parameters) performs inference on long audio without segmentation, eliminating segment boundary distortions while simplifying deployment. Experimental results demonstrate the generalization ability of FTRNN for multi-utterance speech separation and speaker association.
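The full-band/sub-band decomposition described above can be illustrated with a short sketch: one recurrent layer scans across frequency bins within each time frame, and another scans across time frames within each frequency band. The sketch below is a minimal PyTorch rendering of that idea under stated assumptions; the module names, layer sizes, LSTM choice, and the (batch, time, freq, channels) feature layout are all illustrative, not the authors' implementation.

```python
# Minimal sketch of a full-band/sub-band recurrent block in the spirit of the
# FTRNN described in the abstract. All module names, hidden sizes, and the
# input layout (batch, time, freq, channels) are illustrative assumptions.
import torch
import torch.nn as nn


class FTBlock(nn.Module):
    def __init__(self, channels: int = 32, hidden: int = 64):
        super().__init__()
        # Full-band module: a bidirectional RNN over the frequency axis,
        # modeling dependencies across all frequencies within each frame.
        self.freq_norm = nn.LayerNorm(channels)
        self.freq_rnn = nn.LSTM(channels, hidden,
                                bidirectional=True, batch_first=True)
        self.freq_proj = nn.Linear(2 * hidden, channels)
        # Sub-band module: an RNN over the time axis, run independently in
        # each frequency band, capturing long-range temporal patterns.
        self.time_norm = nn.LayerNorm(channels)
        self.time_rnn = nn.LSTM(channels, hidden, batch_first=True)
        self.time_proj = nn.Linear(hidden, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, freq, channels) features from an STFT encoder.
        b, t, f, c = x.shape
        # Full-band: fold time into the batch, scan across frequency.
        y = self.freq_norm(x).reshape(b * t, f, c)
        y, _ = self.freq_rnn(y)
        x = x + self.freq_proj(y).reshape(b, t, f, c)
        # Sub-band: fold frequency into the batch, scan across time. Because
        # the scan runs over the whole recording, no segmentation or
        # stitching is needed at inference time.
        z = self.time_norm(x).permute(0, 2, 1, 3).reshape(b * f, t, c)
        z, _ = self.time_rnn(z)
        z = self.time_proj(z).reshape(b, f, t, c).permute(0, 2, 1, 3)
        return x + z


if __name__ == "__main__":
    block = FTBlock()
    feats = torch.randn(2, 100, 65, 32)  # 2 clips, 100 frames, 65 freq bins
    print(block(feats).shape)            # torch.Size([2, 100, 65, 32])
```

Because the time-axis recurrence carries state across the entire input rather than across fixed windows, a block of this shape can in principle be trained on 10 s segments and still be applied to much longer recordings, which matches the inference-without-segmentation behavior the abstract claims.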
Yuzhu Wang, Archontis Politis, Konstantinos Drossos, Tuomas Virtanen
Computing Technology, Computer Technology
Yuzhu Wang, Archontis Politis, Konstantinos Drossos, Tuomas Virtanen. Multi-Utterance Speech Separation and Association Trained on Short Segments [EB/OL]. (2025-07-03) [2025-07-16]. https://arxiv.org/abs/2507.02562.