
Context-Driven Dynamic Pruning for Large Speech Foundation Models

Source: arXiv
Abstract

Speech foundation models achieve strong generalization across languages and acoustic conditions, but require significant computational resources for inference. For speech foundation models, pruning techniques that dynamically optimize the model structure for the target audio by leveraging external context have been studied. In this work, we extend this line of research and propose context-driven dynamic pruning, a technique that adapts the model's computation to the context shared between input frames and to additional context available during inference. We employ the Open Whisper-style Speech Model (OWSM) and incorporate speaker embeddings, acoustic event embeddings, and language information as additional context. By incorporating the speaker embedding, our method reduces inference computation by 56.7 GFLOPs while improving BLEU scores by a relative 25.7% compared to the fully fine-tuned OWSM model.
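The abstract does not detail the gating mechanism, so the following is a minimal sketch of the general idea, assuming per-layer gating of a Transformer encoder with a binary-concrete relaxation for training. The class names (ContextGatePredictor, GatedEncoder), the layer-level gate granularity, and the 192-dimensional speaker embedding are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class ContextGatePredictor(nn.Module):
    """Maps an external context embedding (e.g. a speaker embedding)
    to one keep/skip gate per encoder layer. Hypothetical sketch."""
    def __init__(self, context_dim: int, num_layers: int, tau: float = 1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_layers),
        )
        self.tau = tau

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        logits = self.net(context)                      # (batch, num_layers)
        if self.training:
            # Binary-concrete relaxation: differentiable soft gates.
            u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
            return torch.sigmoid((logits + u.log() - (1 - u).log()) / self.tau)
        # At inference, hard 0/1 gates allow gated layers to be
        # skipped outright, which is where FLOP savings come from.
        return (logits > 0).float()

class GatedEncoder(nn.Module):
    """Transformer encoder whose layers are scaled by
    context-dependent gates; a zero gate bypasses the layer."""
    def __init__(self, d_model: int, num_layers: int, context_dim: int):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(num_layers)
        )
        self.gate_predictor = ContextGatePredictor(context_dim, num_layers)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        gates = self.gate_predictor(context)            # (batch, num_layers)
        for i, layer in enumerate(self.layers):
            g = gates[:, i].view(-1, 1, 1)
            # Gate the layer's contribution; in a real system a hard
            # zero gate would skip the layer's computation entirely.
            x = g * layer(x) + (1 - g) * x
        return x

# Usage: 100 frames of 80-dim features plus an assumed 192-dim speaker embedding.
encoder = GatedEncoder(d_model=80, num_layers=6, context_dim=192).eval()
frames = torch.randn(2, 100, 80)
speaker = torch.randn(2, 192)
with torch.no_grad():
    out = encoder(frames, speaker)
print(out.shape)  # torch.Size([2, 100, 80])

The soft gates exist only to keep training differentiable; at inference the hard 0/1 gates make whole layers skippable, which is consistent with the kind of GFLOP reduction the abstract reports.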

Masao Someki, Shikhar Bharadwaj, Atharva Anand Joshi, Chyi-Jiunn Lin, Jinchuan Tian, Jee-weon Jung, Markus Müller, Nathan Susanj, Jing Liu, Shinji Watanabe

Subject: Computing Technology, Computer Technology

Masao Someki, Shikhar Bharadwaj, Atharva Anand Joshi, Chyi-Jiunn Lin, Jinchuan Tian, Jee-weon Jung, Markus Müller, Nathan Susanj, Jing Liu, Shinji Watanabe. Context-Driven Dynamic Pruning for Large Speech Foundation Models [EB/OL]. (2025-05-24) [2025-06-28]. https://arxiv.org/abs/2505.18860.