
Long-Form Speech Generation with Spoken Language Models

Source: arXiv
Abstract

We consider the generative modeling of speech over multiple minutes, a requirement for long-form multimedia generation and audio-native voice assistants. However, textless spoken language models struggle to generate plausible speech past tens of seconds, due to the high temporal resolution of speech tokens causing loss of coherence, architectural issues with long-sequence training or extrapolation, and memory costs at inference time. From these considerations we derive SpeechSSM, the first speech language model family to learn from and sample long-form spoken audio (e.g., 16 minutes of read or extemporaneous speech) in a single decoding session without text intermediates. SpeechSSMs leverage recent advances in linear-time sequence modeling to greatly surpass current Transformer spoken LMs in coherence and efficiency on multi-minute generations while still matching them at the utterance level. As we found current spoken language evaluations uninformative, especially in this new long-form setting, we also introduce: LibriSpeech-Long, a benchmark for long-form speech evaluation; new embedding-based and LLM-judged metrics; and quality measurements over length and time. Speech samples, the LibriSpeech-Long dataset, and any future code or model releases can be found at https://google.github.io/tacotron/publications/speechssm/.
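To make the efficiency claim concrete, below is a minimal sketch of the linear-time recurrence idea the abstract appeals to, written in generic state-space-model form. All dimensions, matrices, and inputs here are illustrative assumptions, not the paper's actual SpeechSSM architecture: the point is only that each decoding step updates a fixed-size state, so per-step memory stays constant however many minutes of speech tokens have been generated, whereas a Transformer's attention cache grows with every token.

```python
import numpy as np

# Hypothetical sizes, chosen only for illustration.
rng = np.random.default_rng(0)
d_state, d_model = 16, 8

A = rng.standard_normal((d_state, d_state)) * 0.05  # state transition (scaled small for stability)
B = rng.standard_normal((d_state, d_model)) * 0.1   # input projection
C = rng.standard_normal((d_model, d_state)) * 0.1   # output projection

h = np.zeros(d_state)  # the fixed-size recurrent state
for t in range(10_000):  # an arbitrarily long sequence, O(1) memory per step
    x_t = rng.standard_normal(d_model)  # stand-in for a speech-token embedding
    h = A @ h + B @ x_t                 # linear recurrence: h_t = A h_{t-1} + B x_t
    y_t = C @ h                         # per-step output: y_t = C h_t

print(h.shape)  # (16,) -- the state never grows with sequence length
```

Because the loop carries only `h` forward, decoding cost is linear in sequence length and memory is constant, which is the property that makes single-session multi-minute generation tractable.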

Se Jin Park, Julian Salazar, Keisuke Kinoshita, Aren Jansen, Yong Man Ro, RJ Skerry-Ryan

Computing Technology; Computer Technology

Se Jin Park, Julian Salazar, Keisuke Kinoshita, Aren Jansen, Yong Man Ro, RJ Skerry-Ryan. Long-Form Speech Generation with Spoken Language Models [EB/OL]. (2025-07-10) [2025-07-20]. https://arxiv.org/abs/2412.18603.
