National Preprint Platform

zsLLMCode: An Effective Approach for Code Embedding via LLM with Zero-Shot Learning


Source: arXiv
Abstract

The advent of large language models (LLMs) has greatly advanced artificial intelligence (AI) in software engineering (SE), with code embeddings playing a critical role in tasks like code-clone detection and code clustering. However, existing methods for code embedding, including those based on LLMs, often depend on costly supervised training or fine-tuning for domain adaptation. This paper proposes a novel zero-shot approach, zsLLMCode, which generates code embeddings by combining LLMs with sentence embedding models. The approach eliminates the need for task-specific training or fine-tuning and addresses the erroneous information commonly found in LLM-generated outputs. We conducted a series of experiments to evaluate the performance of the proposed approach across various LLMs and embedding models. The results demonstrate the effectiveness and superiority of zsLLMCode over state-of-the-art unsupervised approaches such as SourcererCC, Code2vec, InferCode, and TransformCode. Our findings highlight the potential of zsLLMCode to advance the field of SE by providing robust and efficient solutions for code embedding tasks.
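The abstract describes a two-stage, zero-shot pipeline: an LLM first produces a natural-language description of a code snippet, and a sentence embedding model then converts that description into a fixed-size vector usable for tasks such as clone detection. A minimal sketch of that idea follows; note that `llm_summarize` and the bag-of-words embedder are stand-in stubs of my own, not the paper's actual models, which would be a real LLM API and a sentence embedding model such as Sentence-BERT.

```python
import math
from collections import Counter

def llm_summarize(code: str) -> str:
    """Placeholder for an LLM call that describes what the code does.

    A real implementation would prompt an LLM, e.g.
    "Summarize the functionality of this code: ...".
    """
    return code  # identity stub, for illustration only

def embed_sentence(text: str) -> Counter:
    """Toy bag-of-words stand-in for a sentence embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def code_embedding(code: str) -> Counter:
    # Zero-shot: no training or fine-tuning -- summarize, then embed.
    return embed_sentence(llm_summarize(code))

# Downstream task example: code-clone detection via embedding similarity.
e1 = code_embedding("def add(a, b): return a + b")
e2 = code_embedding("def add(x, y): return x + y")
sim = cosine(e1, e2)  # higher similarity suggests a likely clone pair
```

With real models substituted in, clone detection reduces to thresholding the cosine similarity between embeddings, and clustering reduces to running any off-the-shelf algorithm over the vectors.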

Zhenyu Chen, Zixiang Xian, Chunrong Fang, Chenhui Cui, Rubing Huang

Computing Technology, Computer Technology

Zhenyu Chen, Zixiang Xian, Chunrong Fang, Chenhui Cui, Rubing Huang. zsLLMCode: An Effective Approach for Code Embedding via LLM with Zero-Shot Learning [EB/OL]. (2024-09-22) [2025-04-26]. https://arxiv.org/abs/2409.14644.
