The Mamba in the Llama: Distilling and Accelerating Hybrid Models

Source: arXiv
Abstract

Linear RNN architectures, like Mamba, can be competitive with Transformer models in language modeling while having advantageous deployment characteristics. Given the focus on training large-scale Transformer models, we consider the challenge of converting these pretrained models for deployment. We demonstrate that it is feasible to distill large Transformers into linear RNNs by reusing the linear projection weights from attention layers with academic GPU resources. The resulting hybrid model, which incorporates a quarter of the attention layers, achieves performance comparable to the original Transformer in chat benchmarks and outperforms open-source hybrid Mamba models trained from scratch with trillions of tokens in both chat benchmarks and general benchmarks. Moreover, we introduce a hardware-aware speculative decoding algorithm that accelerates the inference speed of Mamba and hybrid models. Overall we show how, with limited computation resources, we can remove many of the original attention layers and generate from the resulting model more efficiently. Our top-performing model, distilled from Llama3-8B-Instruct, achieves a 29.61 length-controlled win rate on AlpacaEval 2 against GPT-4 and 7.35 on MT-Bench, surpassing the best 8B scale instruction-tuned linear RNN model. We also find that the distilled model has natural length extrapolation, showing almost perfect accuracy in the needle-in-a-haystack test at 20x the distillation length. Code and pre-trained checkpoints are open-sourced at https://github.com/jxiw/MambaInLlama and https://github.com/itsdaniele/speculative_mamba.
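The abstract's core idea is to initialize the distilled linear RNN (Mamba-style) layers from the pretrained Transformer's attention projections rather than from scratch. Below is a minimal PyTorch sketch of that weight-reuse step, assuming a HuggingFace-style Llama attention block with `q_proj`/`k_proj`/`v_proj`/`o_proj`; the specific mapping of the value, key, query, and output projections onto the SSM's input, B, C, and output projections is an illustrative assumption, not taken from the paper's released code.

```python
import torch
import torch.nn as nn


class ToyAttention(nn.Module):
    """Stand-in for a pretrained attention block (HuggingFace Llama naming assumed)."""
    def __init__(self, hidden: int, kv_dim: int):
        super().__init__()
        self.q_proj = nn.Linear(hidden, hidden, bias=False)
        self.k_proj = nn.Linear(hidden, kv_dim, bias=False)
        self.v_proj = nn.Linear(hidden, kv_dim, bias=False)
        self.o_proj = nn.Linear(hidden, hidden, bias=False)


class DistilledSSMLayer(nn.Module):
    """Linear-RNN layer whose projections are sized to match the attention block."""
    def __init__(self, attn: ToyAttention):
        super().__init__()
        hidden = attn.q_proj.in_features
        kv_dim = attn.k_proj.out_features
        self.x_proj = nn.Linear(hidden, kv_dim, bias=False)    # to be copied from W_V (assumed mapping)
        self.B_proj = nn.Linear(hidden, kv_dim, bias=False)    # to be copied from W_K (assumed mapping)
        self.C_proj = nn.Linear(hidden, hidden, bias=False)    # to be copied from W_Q (assumed mapping)
        self.out_proj = nn.Linear(hidden, hidden, bias=False)  # to be copied from W_O
        # SSM-specific parameters (A, dt, conv, gating, ...) would be added here
        # and trained from scratch during distillation.

    @torch.no_grad()
    def init_from_attention(self, attn: ToyAttention) -> None:
        """Reuse the pretrained attention projections as the RNN layer's projections."""
        self.x_proj.weight.copy_(attn.v_proj.weight)
        self.B_proj.weight.copy_(attn.k_proj.weight)
        self.C_proj.weight.copy_(attn.q_proj.weight)
        self.out_proj.weight.copy_(attn.o_proj.weight)


# Toy usage with small dimensions.
attn = ToyAttention(hidden=256, kv_dim=64)
ssm = DistilledSSMLayer(attn)
ssm.init_from_attention(attn)
```

Per the abstract, roughly a quarter of the attention layers are kept unchanged in the resulting hybrid model, while the replaced layers and their new SSM-specific parameters are trained under the distillation objective with modest (academic-scale) GPU resources.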

Tri Dao, Junxiong Wang, Alexander M. Rush, Daniele Paliotta, Avner May

Subject: Computing Technology, Computer Technology

Tri Dao, Junxiong Wang, Alexander M. Rush, Daniele Paliotta, Avner May. The Mamba in the Llama: Distilling and Accelerating Hybrid Models [EB/OL]. (2025-06-27) [2025-07-16]. https://arxiv.org/abs/2408.15237.
