TTS-CtrlNet: Time varying emotion aligned text-to-speech generation with ControlNet
Recent advances in text-to-speech (TTS) have enabled natural speech synthesis, but fine-grained, time-varying emotion control remains challenging. Existing methods often allow only utterance-level control and require full model fine-tuning on a large emotional speech dataset, which can degrade performance. Inspired by ControlNet (Zhang et al., 2023), which adds conditional control to an existing model, we propose the first ControlNet-based approach for controllable flow-matching TTS (TTS-CtrlNet), which freezes the original model and introduces a trainable copy of it to process additional conditions. We show that TTS-CtrlNet can augment a pretrained large TTS model with intuitive, scalable, and time-varying emotion control while inheriting the abilities of the original model (e.g., zero-shot voice cloning and naturalness). Furthermore, we provide practical recipes for adding emotion control: 1) an optimal architecture design choice identified through block analysis, 2) an emotion-specific flow step, and 3) a flexible control scale. Experiments show that our method effectively adds an emotion controller to an existing TTS model and achieves state-of-the-art emotion similarity scores (Emo-SIM and Aro-Val SIM). The project page is available at: https://curryjung.github.io/ttsctrlnet_project_page
Jaeseok Jeong, Yuna Lee, Mingi Kwon, Youngjung Uh
Subject: Computing Technology, Computer Technology
Jaeseok Jeong, Yuna Lee, Mingi Kwon, Youngjung Uh. TTS-CtrlNet: Time varying emotion aligned text-to-speech generation with ControlNet [EB/OL]. (2025-07-06) [2025-07-21]. https://arxiv.org/abs/2507.04349
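For readers curious how the abstract's ControlNet-style recipe maps to code, below is a minimal, hypothetical PyTorch sketch of the general pattern it describes: freeze the pretrained backbone, run a trainable copy of its blocks on the extra emotion condition, and add the copy's zero-initialized, scale-weighted output back into the frozen path. All class and parameter names (ControlNetBranch, cond_proj, control_scale, etc.) are illustrative assumptions, not the paper's actual implementation.

```python
import copy
import torch
import torch.nn as nn

class ControlNetBranch(nn.Module):
    """ControlNet-style controller (sketch): a trainable copy of the frozen
    backbone blocks that injects an extra condition (e.g., emotion)."""

    def __init__(self, backbone_blocks: nn.ModuleList, cond_dim: int, hidden_dim: int):
        super().__init__()
        # Freeze the pretrained backbone so its abilities (zero-shot voice
        # cloning, naturalness) are preserved.
        self.frozen_blocks = backbone_blocks
        for p in self.frozen_blocks.parameters():
            p.requires_grad_(False)
        # Trainable copy of the backbone blocks processes the new condition.
        self.control_blocks = copy.deepcopy(backbone_blocks)
        for p in self.control_blocks.parameters():
            p.requires_grad_(True)
        # Project the (possibly time-varying) emotion condition into the
        # backbone's hidden space. Hypothetical module, for illustration.
        self.cond_proj = nn.Linear(cond_dim, hidden_dim)
        # Zero-initialized output projections, as in ControlNet, so training
        # starts from the unmodified backbone behavior.
        self.zero_projs = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in backbone_blocks
        )
        for proj in self.zero_projs:
            nn.init.zeros_(proj.weight)
            nn.init.zeros_(proj.bias)

    def forward(self, x, emotion_cond, control_scale: float = 1.0):
        # x: (batch, time, hidden) hidden states of the flow-matching model.
        # emotion_cond: (batch, time, cond_dim) time-varying emotion embedding.
        c = x + self.cond_proj(emotion_cond)
        out = x
        for frozen, ctrl, proj in zip(self.frozen_blocks, self.control_blocks, self.zero_projs):
            c = ctrl(c)
            # Add the scaled control signal back into the frozen path; the
            # paper's "flexible control scale" would correspond to varying
            # control_scale, and its "emotion-specific flow step" to invoking
            # this branch only at selected flow-matching steps (not shown).
            out = frozen(out) + control_scale * proj(c)
        return out

# Toy usage with generic MLP blocks standing in for transformer blocks.
blocks = nn.ModuleList(nn.Sequential(nn.Linear(64, 64), nn.GELU()) for _ in range(4))
branch = ControlNetBranch(blocks, cond_dim=8, hidden_dim=64)
x = torch.randn(2, 100, 64)
emo = torch.randn(2, 100, 8)
y = branch(x, emo, control_scale=0.7)  # control_scale tunes emotion strength
```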