
Vocoder-Projected Feature Discriminator
Source: arXiv

Abstract

In text-to-speech (TTS) and voice conversion (VC), acoustic features, such as mel spectrograms, are typically used as synthesis or conversion targets owing to their compactness and ease of learning. However, because the ultimate goal is to generate high-quality waveforms, employing a vocoder to convert these features into waveforms and applying adversarial training in the time domain is reasonable. Nevertheless, upsampling the waveform introduces significant time and memory overheads. To address this issue, we propose a vocoder-projected feature discriminator (VPFD), which uses vocoder features for adversarial training. Experiments on diffusion-based VC distillation demonstrated that a pretrained and frozen vocoder feature extractor with a single upsampling step is necessary and sufficient to achieve a VC performance comparable to that of waveform discriminators while reducing the training time and memory consumption by 9.6 and 11.4 times, respectively.
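To make the idea concrete, the following is a minimal PyTorch sketch of the setup the abstract describes: a pretrained, frozen vocoder feature extractor that performs a single upsampling step on the mel spectrogram, followed by a small trainable discriminator head that scores the resulting features. All module names, layer sizes, and the upsampling factor here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class VocoderFeatureExtractor(nn.Module):
    # Stand-in for the early layers of a pretrained neural vocoder
    # (e.g. a HiFi-GAN-style generator): a conv pre-net followed by
    # a single transposed-conv upsampling step. In practice these
    # weights would be loaded from a pretrained vocoder and frozen.
    def __init__(self, n_mels=80, channels=256, upsample=8):
        super().__init__()
        self.pre = nn.Conv1d(n_mels, channels, kernel_size=7, padding=3)
        self.up = nn.ConvTranspose1d(channels, channels // 2,
                                     kernel_size=upsample * 2,
                                     stride=upsample,
                                     padding=upsample // 2)

    def forward(self, mel):
        return torch.relu(self.up(torch.relu(self.pre(mel))))

class VPFD(nn.Module):
    # Vocoder-projected feature discriminator (illustrative sketch):
    # frozen vocoder features feed a small trainable head, so
    # adversarial training never touches the full waveform resolution.
    def __init__(self, extractor):
        super().__init__()
        self.extractor = extractor
        for p in self.extractor.parameters():
            p.requires_grad = False  # keep the vocoder feature extractor frozen
        self.head = nn.Sequential(
            nn.Conv1d(128, 64, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(64, 1, kernel_size=3, padding=1),  # per-frame real/fake score
        )

    def forward(self, mel):
        with torch.no_grad():          # no gradients through the frozen extractor
            feats = self.extractor(mel)
        return self.head(feats)

mel = torch.randn(2, 80, 100)          # (batch, mel bins, frames)
scores = VPFD(VocoderFeatureExtractor())(mel)
print(scores.shape)                    # frames upsampled once by 8: (2, 1, 800)
```

Because the features are upsampled only once (factor 8 here) instead of all the way to the audio sample rate, the discriminator operates on sequences hundreds of times shorter than raw waveforms, which is where the reported time and memory savings come from.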

Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, Yuto Kondo

Subject: Electronic Technology Applications

Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, Yuto Kondo. Vocoder-Projected Feature Discriminator [EB/OL]. (2025-08-27) [2025-09-10]. https://arxiv.org/abs/2508.17874.
