
Input Conditioned Layer Dropping in Speech Foundation Models

Source: arXiv
Abstract

Curating foundation speech models for edge and IoT settings, where computational resources vary over time, requires dynamic architectures featuring adaptable reduction strategies. One emerging approach is layer dropping ($\mathcal{LD}$), which skips a fraction of the layers of a backbone network during inference to reduce the computational load, thereby transforming static models into dynamic ones. However, existing approaches are limited either in how they select layers or in that they significantly modify the neural architecture. To this end, we propose input-driven $\mathcal{LD}$, which employs the network's input features and a lightweight layer-selecting network to determine the optimal combination of processing layers. Extensive experimentation on four public speech and audio benchmarks, using two different pre-trained foundation models, demonstrates the effectiveness of our approach: it thoroughly outperforms random dropping and produces results on par with (or better than) early exit.
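
The abstract does not describe the selector's internals, so the following is only a minimal PyTorch sketch of the general idea: a small MLP scores each backbone layer from a mean-pooled summary of the input features, and layers scoring below a keep threshold are skipped via the identity path. The names (`LayerSelector`, `InputConditionedLD`, `keep_threshold`) and all design details here are hypothetical illustrations, not the paper's implementation.

```python
import torch
import torch.nn as nn


class LayerSelector(nn.Module):
    """Lightweight layer-selecting network (hypothetical design): maps a
    pooled summary of the input features to a keep probability per layer."""

    def __init__(self, feat_dim: int, num_layers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_layers),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim) -> mean-pool over time, score each layer
        pooled = x.mean(dim=1)                    # (batch, feat_dim)
        return torch.sigmoid(self.net(pooled))    # (batch, num_layers)


class InputConditionedLD(nn.Module):
    """Wraps a stack of encoder layers and skips those whose keep
    probability falls below a threshold, reducing inference compute."""

    def __init__(self, layers: nn.ModuleList, feat_dim: int,
                 keep_threshold: float = 0.5):
        super().__init__()
        self.layers = layers
        self.selector = LayerSelector(feat_dim, len(layers))
        self.keep_threshold = keep_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        keep_probs = self.selector(x)             # (batch, num_layers)
        # One decision per batch for simplicity; per-utterance routing
        # would branch on each row instead.
        keep = keep_probs.mean(dim=0) > self.keep_threshold
        for i, layer in enumerate(self.layers):
            if keep[i]:
                x = layer(x)                      # process this layer
            # else: identity skip -- x passes through unchanged
        return x


# Usage: generic transformer layers stand in for the pre-trained backbone.
dim, depth = 256, 12
backbone = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
     for _ in range(depth)]
)
model = InputConditionedLD(backbone, feat_dim=dim)
feats = torch.randn(2, 100, dim)                  # (batch, frames, features)
print(model(feats).shape)                         # torch.Size([2, 100, 256])
```

Note that the hard keep/skip decision is non-differentiable, so training such a selector typically relies on a relaxation such as Gumbel-softmax; the abstract does not state which mechanism the authors use.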

Abdul Hannan, Daniele Falavigna, Alessio Brutti

Computing Technology; Computer Technology

Abdul Hannan, Daniele Falavigna, Alessio Brutti. Input Conditioned Layer Dropping in Speech Foundation Models [EB/OL]. (2025-07-10) [2025-07-20]. https://arxiv.org/abs/2507.07954.
