
Hierarchical Emotion Prediction and Control in Text-to-Speech Synthesis

Source: arXiv

Abstract

It remains a challenge to effectively control the emotion rendering in text-to-speech (TTS) synthesis. Prior studies have primarily focused on learning a global prosodic representation at the utterance level, which strongly correlates with linguistic prosody. Our goal is to construct a hierarchical emotion distribution (ED) that effectively encapsulates intensity variations of emotions at various levels of granularity, encompassing phonemes, words, and utterances. During TTS training, the hierarchical ED is extracted from the ground-truth audio and guides the predictor to establish a connection between emotional and linguistic prosody. At run-time inference, the TTS model generates emotional speech and, at the same time, provides quantitative control of emotion over the speech constituents. Both objective and subjective evaluations validate the effectiveness of the proposed framework in terms of emotion prediction and control.
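To make the idea of a hierarchical emotion distribution (ED) concrete, the following is a minimal sketch of one plausible data layout: an intensity vector over emotion categories at the phoneme, word, and utterance levels, where each higher level is aggregated from the level below. All names, the emotion category set, and the averaging-based aggregation are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a hierarchical emotion distribution (ED).
# Each level holds intensity vectors over a fixed emotion set; higher
# levels are formed by averaging the spans of the level below.
# This is an illustrative assumption, not the paper's actual method.
from dataclasses import dataclass
from typing import List, Tuple

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprise"]  # assumed set


@dataclass
class HierarchicalED:
    phoneme_ed: List[List[float]]   # one intensity vector per phoneme
    word_ed: List[List[float]]      # one intensity vector per word
    utterance_ed: List[float]       # a single utterance-level vector


def aggregate(vectors: List[List[float]]) -> List[float]:
    """Average lower-level intensity vectors into one higher-level vector."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


# Example: three phonemes grouped into two words.
phoneme_ed = [
    [0.1, 0.7, 0.1, 0.05, 0.05],   # phonemes of word 1
    [0.2, 0.6, 0.1, 0.05, 0.05],
    [0.1, 0.1, 0.6, 0.1, 0.1],     # phoneme of word 2
]
word_spans: List[Tuple[int, int]] = [(0, 2), (2, 3)]  # phoneme index ranges
word_ed = [aggregate(phoneme_ed[s:e]) for s, e in word_spans]
utterance_ed = aggregate(word_ed)

ed = HierarchicalED(phoneme_ed, word_ed, utterance_ed)
```

Structured this way, quantitative emotion control at inference amounts to editing the intensity vector at the desired level of granularity (e.g., scaling up "sad" for a single word) before conditioning the TTS model.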

Kun Zhou, Sho Inoue, Haizhou Li, Shuai Wang

DOI: 10.1109/ICASSP48485.2024.10445996

Subject: Computing Technology, Computer Technology

Kun Zhou, Sho Inoue, Haizhou Li, Shuai Wang. Hierarchical Emotion Prediction and Control in Text-to-Speech Synthesis [EB/OL]. (2024-05-15) [2025-07-21]. https://arxiv.org/abs/2405.09171
