TASU: Text-Only Alignment for Speech Understanding
Jing Peng, Yi Yang, Xu Li, Yu Xi, Quanwei Tang, Yangui Fang, Junjie Li, Kai Yu
Abstract
Recent advances in Speech Large Language Models (Speech LLMs) have paved the way for unified architectures across diverse speech understanding tasks. However, prevailing alignment paradigms rely heavily on large-scale audio-text paired data and computationally intensive training, yet often exhibit limited generalization to unseen domains or tasks. To address these limitations, we propose TASU (Text-only Alignment for Speech Understanding), a novel alignment paradigm that can leverage only unpaired text data to guide cross-modal alignment. Experiments show that TASU achieves competitive zero-shot speech recognition. Leveraging this property, it can further function as a pre-training stage in curriculum learning, enhancing domain generalization in speech recognition. Ultimately, TASU can extend its zero-shot generalization to a wide range of speech understanding tasks and notably outperforms prominent Speech LLMs including GLM-4-Voice and Step-Audio on the MMSU benchmark, establishing TASU as an efficient and scalable alignment paradigm for Speech LLMs.
Jing Peng, Yi Yang, Xu Li, Yu Xi, Quanwei Tang, Yangui Fang, Junjie Li, Kai Yu. TASU: Text-Only Alignment for Speech Understanding [EB/OL]. (2026-01-25) [2026-04-03]. https://arxiv.org/abs/2511.03310.
Subject classification: Linguistics