Whisfusion: Parallel ASR Decoding via a Diffusion Transformer

Source: arXiv
Abstract

Fast Automatic Speech Recognition (ASR) is critical for latency-sensitive applications such as real-time captioning and meeting transcription. However, truly parallel ASR decoding remains challenging due to the sequential nature of autoregressive (AR) decoders and the context limitations of non-autoregressive (NAR) methods. While modern ASR encoders can process up to 30 seconds of audio at once, AR decoders still generate tokens sequentially, creating a latency bottleneck. We propose Whisfusion, the first framework to fuse a pre-trained Whisper encoder with a text diffusion decoder. This NAR architecture resolves the AR latency bottleneck by processing the entire acoustic context in parallel at every decoding step. A lightweight cross-attention adapter trained via parameter-efficient fine-tuning (PEFT) bridges the two modalities. We also introduce a batch-parallel, multi-step decoding strategy that improves accuracy by increasing the number of candidates with minimal impact on speed. Fine-tuned solely on LibriSpeech (960h), Whisfusion achieves a lower WER than Whisper-tiny (8.3% vs. 9.7%), and offers comparable latency on short audio. For longer utterances (>20s), it is up to 2.6x faster than the AR baseline, establishing a new, efficient operating point for long-form ASR. The implementation and training scripts are available at https://github.com/taeyoun811/Whisfusion.
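
As a rough illustration of the two ideas in the abstract, the following is a minimal PyTorch sketch, not the authors' implementation: a small trainable cross-attention adapter that lets text-decoder states attend to frozen encoder states, and a toy batch-parallel, multi-step mask-denoising decode loop. All module names, dimensions, and hyperparameters are illustrative assumptions rather than values from the Whisfusion code.

import torch
import torch.nn as nn


class CrossAttentionAdapter(nn.Module):
    """Trainable bridge: text-decoder states (queries) attend to acoustic states."""

    def __init__(self, d_model: int = 384, n_heads: int = 6):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, dec_states: torch.Tensor, enc_states: torch.Tensor) -> torch.Tensor:
        attended, _ = self.attn(dec_states, enc_states, enc_states)
        return self.norm(dec_states + attended)


@torch.no_grad()
def multi_step_decode(denoiser, enc_states, seq_len, mask_id,
                      num_steps=4, num_candidates=8):
    """Decode several candidates in one batch; at each step, re-predict every
    position in parallel and commit the most confident still-masked ones."""
    tokens = torch.full((num_candidates, seq_len), mask_id, dtype=torch.long)
    enc = enc_states.expand(num_candidates, -1, -1)   # share the acoustic context
    reveal_per_step = max(1, seq_len // num_steps)
    for step in range(num_steps):
        logits = denoiser(tokens, enc)                # (num_candidates, seq_len, vocab)
        conf, pred = logits.softmax(dim=-1).max(dim=-1)
        still_masked = tokens == mask_id
        if step == num_steps - 1:                     # last step: fill everything left
            return torch.where(still_masked, pred, tokens)
        conf = conf.masked_fill(~still_masked, float("-inf"))
        idx = conf.topk(reveal_per_step, dim=-1).indices
        tokens.scatter_(1, idx, pred.gather(1, idx))
    return tokens


# Toy usage with random tensors standing in for the frozen encoder output and the
# trained diffusion decoder; in practice the denoiser would wrap the adapter above,
# and a separate scoring pass would pick the best of the num_candidates hypotheses.
if __name__ == "__main__":
    d_model, vocab, text_len = 384, 1000, 32
    adapter = CrossAttentionAdapter(d_model)
    enc_out = torch.randn(1, 150, d_model)            # stand-in for encoder states
    dummy_denoiser = lambda tok, enc: torch.randn(tok.shape[0], tok.shape[1], vocab)
    hyps = multi_step_decode(dummy_denoiser, enc_out, text_len, mask_id=vocab)
    print(hyps.shape)                                 # torch.Size([8, 32])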

Taeyoun Kwon, Junhyuk Ahn, Taegeun Yun, Heeju Jwa, Yoonchae Choi, Siwon Park, Nam-Joon Kim, Jangchan Kim, Hyun Gon Ryu, Hyuk-Jae Lee

Subjects: Computing Technology, Computer Technology; Communications

Taeyoun Kwon, Junhyuk Ahn, Taegeun Yun, Heeju Jwa, Yoonchae Choi, Siwon Park, Nam-Joon Kim, Jangchan Kim, Hyun Gon Ryu, Hyuk-Jae Lee. Whisfusion: Parallel ASR Decoding via a Diffusion Transformer [EB/OL]. (2025-08-09) [2025-08-24]. https://arxiv.org/abs/2508.07048