
On the Effect of Token Merging on Pre-trained Models for Code

Source: arXiv
Abstract

Tokenization is a fundamental component of language models for code. It involves breaking down the input into units that are later passed to the language model stack to learn high-dimensional representations used in various contexts, from classification to generation. However, the output of these tokenizers is often longer than that traditionally used in compilers and interpreters. This could result in undesirable effects, such as increased computational overhead. In this work, we investigate the effect of merging the hidden representations of subtokens that belong to the same semantic unit, such as subtokens that form a single identifier. We propose two strategies: one based on averaging the representations and another that leverages a learning-based approach. Both methods can be seamlessly integrated with existing language models for code. We conduct experiments using six language models for code: CodeBERT, GraphCodeBERT, UniXCoder, CodeT5, CodeT5+ (220M), and CodeT5+ (770M), across three software engineering tasks: vulnerability detection, code classification, and code translation. Results show that these strategies can reduce the number of floating-point operations by $1\%$ to $19\%$. Regarding downstream performance, the most significant degradation was observed in the vulnerability detection task, where the F1 score decreased by $1.82$ points compared to the baseline. In contrast, for code translation, we observed an improvement of $2.47$ points in CodeBLEU. This work contributes to the broader effort of improving language models for code across multiple dimensions, including both computational efficiency and downstream performance.
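
To illustrate the averaging-based strategy described in the abstract, below is a minimal sketch in PyTorch. It is not the authors' implementation: the function name, the (seq_len, dim) hidden-state layout, and the group_ids vector that maps each subtoken to a semantic unit (e.g., all subtokens of one identifier) are assumptions made for illustration.

import torch

def merge_subtoken_states(hidden_states: torch.Tensor,
                          group_ids: torch.Tensor) -> torch.Tensor:
    """Average the hidden states of subtokens that share a group id.

    hidden_states: (seq_len, dim) hidden states from a transformer layer.
    group_ids:     (seq_len,) integer id of the semantic unit each subtoken
                   belongs to (e.g., one id per identifier).
    Returns a shorter sequence with one averaged vector per semantic unit.
    """
    num_groups = int(group_ids.max().item()) + 1
    dim = hidden_states.size(1)
    # Sum the vectors within each group, then divide by the group size.
    sums = torch.zeros(num_groups, dim, dtype=hidden_states.dtype)
    sums.index_add_(0, group_ids, hidden_states)
    counts = torch.bincount(group_ids, minlength=num_groups).clamp(min=1)
    return sums / counts.unsqueeze(1)

# Example: "getUserName" tokenized as ["get", "User", "Name"] forms one group.
states = torch.randn(5, 768)            # 5 subtokens, hidden size 768
groups = torch.tensor([0, 1, 1, 1, 2])  # the middle three subtokens are merged
print(merge_subtoken_states(states, groups).shape)  # torch.Size([3, 768])

Merging in this way shortens the sequence passed to the remaining layers, which is what yields the reduction in floating-point operations reported in the abstract.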

Mootez Saad, Hao Li, Tushar Sharma, Ahmed E. Hassan

Computing Technology; Computer Technology

Mootez Saad, Hao Li, Tushar Sharma, Ahmed E. Hassan. On the Effect of Token Merging on Pre-trained Models for Code [EB/OL]. (2025-07-19) [2025-08-16]. https://arxiv.org/abs/2507.14423.
