National Preprint Platform

OpusLM: A Family of Open Unified Speech Language Models

Source: Arxiv
English Abstract

This paper presents Open Unified Speech Language Models (OpusLMs), a family of open foundational speech language models (SpeechLMs) with up to 7B parameters. Initialized from decoder-only text language models, the OpusLMs are continuously pre-trained on 213K hours of speech-text pairs and 292B text-only tokens. We demonstrate that our OpusLMs achieve performance comparable (or even superior) to existing SpeechLMs in speech recognition, speech synthesis, and text-only capabilities. Technically, this paper articulates our SpeechLM design choices in tokenization, multi-stream language modeling, and multi-stage training strategies. We experimentally demonstrate the importance of model-size scaling and the effect of annealing data selection. The OpusLMs are built entirely from publicly available materials and are fully transparent models. We release our code, data, checkpoints, and training logs to facilitate open SpeechLM research.
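The abstract names multi-stream language modeling as one of the design choices. A common way to realize this (used by several codec-based SpeechLMs, though the abstract does not specify OpusLM's exact scheme, so this is an illustrative assumption) is a delay pattern: each codec token stream is offset by one extra step so that stream k at time t is predicted after streams < k at the same time and all streams at earlier times, letting a single decoder model several parallel streams. A minimal sketch:

```python
# Minimal sketch of delay-pattern multi-stream token layout. This is a
# generic illustration of the technique, NOT the OpusLM implementation;
# PAD is a hypothetical padding token used to fill the delay offsets.

PAD = 0

def apply_delay_pattern(streams):
    """Offset stream k by k steps and pad the gaps so all rows align."""
    n = len(streams)
    return [[PAD] * k + list(s) + [PAD] * (n - 1 - k)
            for k, s in enumerate(streams)]

def undo_delay_pattern(delayed):
    """Invert apply_delay_pattern, recovering the original streams."""
    n = len(delayed)
    T = len(delayed[0]) - (n - 1)
    return [row[k:k + T] for k, row in enumerate(delayed)]

streams = [[1, 2, 3],   # codec stream 0 over 3 time steps
           [4, 5, 6]]   # codec stream 1 over the same 3 steps
delayed = apply_delay_pattern(streams)
# delayed == [[1, 2, 3, 0],
#             [0, 4, 5, 6]]  -- stream 1 trails stream 0 by one step
assert undo_delay_pattern(delayed) == streams
```

At each decoding step the model then emits one token per stream from the same hidden state, and the delay guarantees the intra-step dependency order is respected.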

Shinji Watanabe, Jinchuan Tian, William Chen, Yifan Peng, Jiatong Shi, Siddhant Arora, Shikhar Bharadwaj, Takashi Maekaku, Yusuke Shinohara, Keita Goto, Xiang Yue, Huck Yang

Linguistics

Shinji Watanabe, Jinchuan Tian, William Chen, Yifan Peng, Jiatong Shi, Siddhant Arora, Shikhar Bharadwaj, Takashi Maekaku, Yusuke Shinohara, Keita Goto, Xiang Yue, Huck Yang. OpusLM: A Family of Open Unified Speech Language Models [EB/OL]. (2025-06-21) [2025-07-16]. https://arxiv.org/abs/2506.17611.
