L-MTP: Leap Multi-Token Prediction Beyond Adjacent Context for Large Language Models

Source: arXiv
English Abstract

Large language models (LLMs) have achieved notable progress. Despite their success, next-token prediction (NTP), the dominant method for LLM training and inference, is constrained in both contextual coverage and inference efficiency due to its inherently sequential process. To overcome these challenges, we propose leap multi-token prediction (L-MTP), an innovative token prediction method that extends the capabilities of multi-token prediction (MTP) by introducing a leap-based mechanism. Unlike conventional MTP, which generates multiple tokens at adjacent positions, L-MTP strategically skips over intermediate tokens, predicting non-sequential ones in a single forward pass. This structured leap not only enhances the model's ability to capture long-range dependencies but also enables a decoding strategy specially optimized for non-sequential leap token generation, effectively accelerating inference. We theoretically demonstrate the benefit of L-MTP in improving inference efficiency. Experiments across diverse benchmarks validate its merit in boosting both LLM performance and inference speed. The source code will be publicly available.
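To make the leap-based mechanism described in the abstract concrete, the sketch below shows one way such training could look in PyTorch: a shared hidden state at position t feeds several heads that predict tokens at non-adjacent future offsets rather than adjacent ones. The head structure, the example leap offsets (1, 3, 5), and the uniform loss averaging are illustrative assumptions for exposition only, not the paper's actual implementation.

```python
# Minimal, illustrative sketch of leap-based multi-token prediction.
# Assumptions (not from the paper): linear prediction heads, leap offsets (1, 3, 5),
# and a simple averaged cross-entropy loss.
import torch
import torch.nn as nn


class LeapMTPHeads(nn.Module):
    """Toy heads predicting tokens at non-adjacent (leaped) future offsets
    from a shared hidden state, instead of adjacent positions as in plain MTP."""

    def __init__(self, hidden_size, vocab_size, leap_offsets=(1, 3, 5)):
        super().__init__()
        self.leap_offsets = leap_offsets  # hypothetical offsets: predict t+1, t+3, t+5
        self.heads = nn.ModuleList(nn.Linear(hidden_size, vocab_size) for _ in leap_offsets)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_size) from the shared LLM trunk
        return {off: head(hidden_states) for off, head in zip(self.leap_offsets, self.heads)}


def leap_mtp_loss(logits_by_offset, input_ids, pad_id=0):
    """Average cross-entropy over leaped offsets: the state at position t is
    trained to predict the token at position t + offset."""
    loss_fn = nn.CrossEntropyLoss(ignore_index=pad_id)
    total = 0.0
    for offset, logits in logits_by_offset.items():
        pred = logits[:, :-offset, :]   # predictions usable for this offset
        target = input_ids[:, offset:]  # ground-truth tokens `offset` steps ahead
        total = total + loss_fn(pred.reshape(-1, pred.size(-1)), target.reshape(-1))
    return total / len(logits_by_offset)


if __name__ == "__main__":
    # Random tensors stand in for a transformer trunk's outputs.
    batch, seq_len, hidden, vocab = 2, 16, 64, 100
    heads = LeapMTPHeads(hidden, vocab)
    hidden_states = torch.randn(batch, seq_len, hidden)
    input_ids = torch.randint(1, vocab, (batch, seq_len))
    loss = leap_mtp_loss(heads(hidden_states), input_ids)
    loss.backward()
    print(loss.item())
```

At inference, as the abstract notes, leaped predictions from successive positions can be combined by a decoding strategy tailored to non-sequential generation; the sketch above only illustrates the training-time prediction targets.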

Xiaohao Liu, Xiaobo Xia, Weixiang Zhao, Manyi Zhang, Xianzhi Yu, Xiu Su, Shuo Yang, See-Kiong Ng, Tat-Seng Chua

Subject: Computing Technology; Computer Technology

Xiaohao Liu, Xiaobo Xia, Weixiang Zhao, Manyi Zhang, Xianzhi Yu, Xiu Su, Shuo Yang, See-Kiong Ng, Tat-Seng Chua. L-MTP: Leap Multi-Token Prediction Beyond Adjacent Context for Large Language Models [EB/OL]. (2025-05-23) [2025-06-07]. https://arxiv.org/abs/2505.17505.
