Reuse or Generate? Accelerating Code Editing via Edit-Oriented Speculative Decoding
Large Language Models (LLMs) have demonstrated remarkable capabilities in code editing, substantially enhancing software development productivity. However, the inherent complexity of code editing tasks forces existing approaches to rely on LLMs' autoregressive end-to-end generation, where decoding speed plays a critical role in efficiency. While inference acceleration techniques such as speculative decoding have been applied to improve decoding efficiency, these methods fail to account for the unique characteristics of code editing tasks, where changes are typically localized and existing code segments are reused. To address this limitation, we propose EfficientEdit, a novel method that improves the efficiency of LLM-based code editing through two key mechanisms built on speculative decoding: (1) effective reuse of original code segments while identifying potential edit locations, and (2) efficient generation of edit content via high-quality drafts from edit-oriented draft models and a dynamic verification mechanism that balances quality and acceleration. Experimental results show that EfficientEdit achieves up to 10.38$\times$ and 13.09$\times$ speedup over standard autoregressive decoding on CanItEdit and CodeIF-Bench, respectively, outperforming state-of-the-art inference acceleration approaches by up to 90.6%.
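To make the reuse-or-generate idea concrete, the sketch below illustrates a generic edit-oriented speculative decoding loop in Python. This is a minimal conceptual sketch, not EfficientEdit's actual implementation: the propose_draft and verify functions are hypothetical stand-ins for the edit-oriented draft model and the target LLM's verification pass described in the abstract, and the "target model" is simulated by a fixed edited token sequence so the example runs on its own.

# Minimal, self-contained sketch of edit-oriented speculative decoding.
# Hypothetical setup: the target model is simulated by a known edited
# token sequence; in practice verification would be an LLM forward pass.

def propose_draft(original_tokens, cursor, k):
    """Draft k tokens by reusing the original code starting at `cursor`.
    A real edit-oriented draft model would also propose new edit content."""
    return original_tokens[cursor:cursor + k]

def verify(draft, output_len, target_tokens):
    """Accept the longest draft prefix that matches the target model's
    next tokens (simulated here by `target_tokens`)."""
    accepted = []
    for i, tok in enumerate(draft):
        if output_len + i < len(target_tokens) and tok == target_tokens[output_len + i]:
            accepted.append(tok)
        else:
            break
    return accepted

def speculative_edit(original_tokens, target_tokens, k=4):
    output, cursor, verification_rounds = [], 0, 0
    while len(output) < len(target_tokens):
        draft = propose_draft(original_tokens, cursor, k)
        accepted = verify(draft, len(output), target_tokens)
        verification_rounds += 1              # one verification pass per round
        output.extend(accepted)
        cursor += len(accepted)
        if len(accepted) < max(len(draft), 1):
            # Draft rejected, i.e. a likely edit location: fall back to
            # generating one token with the target model (simulated lookup).
            output.append(target_tokens[len(output)])
            # Resynchronize the reuse cursor after the edit; a real system
            # would realign on the edit span, here we simply skip one token.
            cursor += 1
    return output, verification_rounds

if __name__ == "__main__":
    original = "def add ( a , b ) : return a + b".split()
    edited   = "def add ( a , b ) : return a - b".split()
    out, rounds = speculative_edit(original, edited)
    print(" ".join(out))                      # reproduces `edited`
    print("verification rounds:", rounds)     # fewer rounds than tokens

Because most drafted tokens are copied directly from the original code and accepted in bulk, the number of verification rounds is much smaller than the number of output tokens, which is the source of the speedup the abstract reports.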
Peiding Wang, Li Zhang, Fang Liu, Yinghao Zhu, Wang Xu, Lin Shi, Xiaoli Lian, Minxiao Li, Bo Shen, An Fu
Computing Technology, Computer Technology
Peiding Wang, Li Zhang, Fang Liu, Yinghao Zhu, Wang Xu, Lin Shi, Xiaoli Lian, Minxiao Li, Bo Shen, An Fu. Reuse or Generate? Accelerating Code Editing via Edit-Oriented Speculative Decoding [EB/OL]. (2025-06-03) [2025-07-03]. https://arxiv.org/abs/2506.02780.