National Preprint Platform

Dub-S2ST: Textless Speech-to-Speech Translation for Seamless Dubbing

Source: arXiv

Abstract

This paper introduces a cross-lingual dubbing system that translates speech from one language to another while preserving key characteristics such as duration, speaker identity, and speaking speed. Despite the strong translation quality of existing speech translation approaches, they often overlook the transfer of speech patterns, leading to mismatches with source speech and limiting their suitability for dubbing applications. To address this, we propose a discrete diffusion-based speech-to-unit translation model with explicit duration control, enabling time-aligned translation. We then synthesize speech based on the predicted units and source identity with a conditional flow matching model. Additionally, we introduce a unit-based speed adaptation mechanism that guides the translation model to produce speech at a rate consistent with the source, without relying on any text. Extensive experiments demonstrate that our framework generates natural and fluent translations that align with the original speech's duration and speaking pace, while achieving competitive translation performance.
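The pipeline above can be sketched in code. This is a minimal illustrative skeleton, not the authors' implementation: all function names are hypothetical, the model stages are stubs, and the length-budget rule is an assumed mechanism for how fixed-duration discrete units give explicit duration and speed control.

```python
# Hypothetical sketch of the two-stage Dub-S2ST pipeline: duration-controlled
# speech-to-unit translation followed by unit-to-speech synthesis.
# All names and the length-budget heuristic are illustrative assumptions.

def target_length_budget(num_source_units: int, speed_ratio: float = 1.0) -> int:
    """Duration control: derive the target unit count from the source length.

    If each discrete unit covers a fixed time span, constraining the number
    of generated units constrains the translated speech's total duration.
    speed_ratio > 1.0 requests faster speech (fewer units in the same time).
    """
    return max(1, round(num_source_units / speed_ratio))

def translate_units(source_units: list[int], speed_ratio: float = 1.0) -> list[int]:
    """Stage 1 (stub): discrete diffusion speech-to-unit translation.

    A real model would iteratively denoise a masked target sequence of the
    budgeted length; here we return a placeholder of the correct length.
    """
    budget = target_length_budget(len(source_units), speed_ratio)
    return [0] * budget  # placeholder target units

def synthesize(target_units: list[int], speaker_embedding: str) -> str:
    """Stage 2 (stub): conditional flow matching unit-to-speech synthesis,
    conditioned on the source speaker identity (no text involved)."""
    return f"waveform({len(target_units)} units, speaker={speaker_embedding})"

source_units = list(range(50))           # 50 units ~ fixed source duration
out = translate_units(source_units)      # same pace as the source
assert len(out) == 50
fast = translate_units(source_units, speed_ratio=1.25)
assert len(fast) == 40                   # 25% faster -> fewer units
```

The key design point the sketch mirrors is that duration and speaking speed are controlled entirely in the discrete unit domain, so no text or phoneme-level alignment is needed.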

Jeongsoo Choi, Jaehun Kim, Joon Son Chung

Subjects: Computing Technology; Computer Technology

Jeongsoo Choi, Jaehun Kim, Joon Son Chung. Dub-S2ST: Textless Speech-to-Speech Translation for Seamless Dubbing [EB/OL]. (2025-05-27) [2025-06-13]. https://arxiv.org/abs/2505.20899.
