
Assessing Small Language Models for Code Generation: An Empirical Study with Benchmarks

Source: arXiv

Abstract

Recent advances in Small Language Models (SLMs) have opened new possibilities for efficient code generation. SLMs offer lightweight and cost-effective alternatives to Large Language Models (LLMs), making them attractive for use in resource-constrained environments. However, empirical understanding of SLMs, particularly of their capabilities, limitations, and performance trade-offs in code generation, remains limited. This study presents a comprehensive empirical evaluation of 20 open-source SLMs ranging from 0.4B to 10B parameters on five diverse code-related benchmarks (HumanEval, MBPP, Mercury, HumanEvalPack, and CodeXGLUE). The models are assessed along three dimensions: i) functional correctness of generated code, ii) computational efficiency, and iii) performance across multiple programming languages. The findings reveal that several compact SLMs achieve competitive results while maintaining a balance between performance and efficiency, making them viable for deployment in resource-constrained environments. Further accuracy gains, however, require switching to larger models, which generally outperform their smaller counterparts but demand far more computational power. We observe that a 10% performance improvement can require nearly a 4x increase in VRAM consumption, highlighting a trade-off between effectiveness and scalability. In addition, the multilingual performance analysis reveals that SLMs tend to perform better in languages such as Python, Java, and PHP, while exhibiting relatively weaker performance in Go, C++, and Ruby. However, statistical analysis suggests these differences are not significant, indicating that SLMs generalize across programming languages. Based on these findings, this work provides insights into the design and selection of SLMs for real-world code generation tasks.
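Functional correctness on benchmarks such as HumanEval and MBPP is typically measured with unit-test-based pass@k metrics (pass@1 when a single greedy sample is drawn per task). The sketch below illustrates that evaluation loop under stated assumptions: the task format, helper names, and toy example are illustrative, not the paper's actual harness, and real harnesses execute candidate code in sandboxed processes with timeouts rather than a bare exec.

```python
# A minimal sketch of unit-test-based functional-correctness scoring
# (pass@1): one sample per task, counted as solved only if the
# benchmark's tests all pass. Task format and helpers are illustrative;
# real harnesses run candidates in an isolated, sandboxed process.

def passes_tests(candidate_code: str, test_code: str) -> bool:
    """Return True if the generated solution passes the task's tests."""
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)  # define the candidate function
        exec(test_code, namespace)       # run assert-based unit tests
        return True
    except Exception:
        return False

def pass_at_1(tasks, generate) -> float:
    """Fraction of tasks solved by a single sample from `generate`."""
    solved = sum(
        passes_tests(generate(task["prompt"]), task["tests"])
        for task in tasks
    )
    return solved / len(tasks)

# Hypothetical usage with one toy task:
toy_task = {"prompt": "def add(a, b):\n", "tests": "assert add(2, 3) == 5"}
model = lambda prompt: prompt + "    return a + b\n"
print(pass_at_1([toy_task], model))  # 1.0
```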

Md Mahade Hasan, Muhammad Waseem, Kai-Kristian Kemell, Jussi Rasku, Juha Ala-Rantala, Pekka Abrahamsson

Computing Technology, Computer Technology

Md Mahade Hasan, Muhammad Waseem, Kai-Kristian Kemell, Jussi Rasku, Juha Ala-Rantala, Pekka Abrahamsson. Assessing Small Language Models for Code Generation: An Empirical Study with Benchmarks [EB/OL]. (2025-07-09) [2025-07-16]. https://arxiv.org/abs/2507.03160.
