Interpolating Speaker Identities in Embedding Space for Data Expansion
The success of deep learning-based speaker verification systems is largely attributed to access to large-scale and diverse speaker identity data. However, collecting data from more identities is expensive, challenging, and often limited by privacy concerns. To address this limitation, we propose INSIDE (Interpolating Speaker Identities in Embedding Space), a novel data expansion method that synthesizes new speaker identities by interpolating between existing speaker embeddings. Specifically, we select pairs of nearby speaker embeddings from a pretrained speaker embedding space and compute intermediate embeddings using spherical linear interpolation. These interpolated embeddings are then fed to a text-to-speech system to generate corresponding speech waveforms. The resulting data is combined with the original dataset to train downstream models. Experiments show that models trained with INSIDE-expanded data outperform those trained only on real data, achieving 3.06% to 5.24% relative improvements. While INSIDE is primarily designed for speaker verification, we also validate its effectiveness on gender classification, where it yields a 13.44% relative improvement. Moreover, INSIDE is compatible with other augmentation techniques and can serve as a flexible, scalable addition to existing training pipelines.
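The core interpolation step described above can be illustrated with a minimal sketch of spherical linear interpolation (slerp) between two speaker embeddings. The function and variable names below are illustrative assumptions, not the authors' implementation; the abstract only states that nearby embeddings from a pretrained extractor are combined via slerp before being passed to a text-to-speech system.

```python
# Minimal sketch: spherical linear interpolation between two speaker embeddings.
# Names, dimensions, and the midpoint ratio are assumptions for illustration only.
import numpy as np

def slerp(e1: np.ndarray, e2: np.ndarray, t: float) -> np.ndarray:
    """Interpolate between two speaker embeddings along the unit hypersphere at ratio t in [0, 1]."""
    # Length-normalize, since speaker embeddings are typically compared on the unit sphere.
    e1 = e1 / np.linalg.norm(e1)
    e2 = e2 / np.linalg.norm(e2)
    # Angle between the two embeddings.
    cos_omega = np.clip(np.dot(e1, e2), -1.0, 1.0)
    omega = np.arccos(cos_omega)
    if omega < 1e-6:
        # Nearly identical embeddings: fall back to ordinary linear interpolation.
        return (1.0 - t) * e1 + t * e2
    sin_omega = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / sin_omega) * e1 + (np.sin(t * omega) / sin_omega) * e2

# Example: a "new identity" halfway between two nearby speakers from a pretrained extractor.
rng = np.random.default_rng(0)
spk_a, spk_b = rng.standard_normal(192), rng.standard_normal(192)  # 192-dim embeddings assumed
new_identity = slerp(spk_a, spk_b, t=0.5)
```

The resulting interpolated embedding would then condition a text-to-speech model to synthesize waveforms for the new identity, which are added to the training set.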
Tianchi Liu, Ruijie Tao, Qiongqiong Wang, Yidi Jiang, Hardik B. Sailor, Ke Zhang, Jingru Lin, Haizhou Li
Computing Technology, Computer Technology
Tianchi Liu, Ruijie Tao, Qiongqiong Wang, Yidi Jiang, Hardik B. Sailor, Ke Zhang, Jingru Lin, Haizhou Li. Interpolating Speaker Identities in Embedding Space for Data Expansion [EB/OL]. (2025-08-26) [2025-09-06]. https://arxiv.org/abs/2508.19210