A Comprehensive Study on Large Language Models for Mutation Testing
Large Language Models (LLMs) have recently been used to generate mutants in both research and industrial practice. However, there has been no comprehensive empirical study of their performance for this increasingly important LLM-based Software Engineering application. To address this, we report the results of a comprehensive empirical study of six different LLMs, including both state-of-the-art open- and closed-source models, on 851 real bugs drawn from two real-world Java bug benchmarks. Our results reveal that, compared to existing rule-based approaches, LLMs generate more diverse mutants that are behaviorally closer to real bugs and, most importantly, achieve 90.1% higher relative fault detection: 79.1% (for LLMs) vs. 41.6% (for rule-based), an increase of 37.5 percentage points. Nevertheless, our results also reveal that this improved effectiveness comes at a cost: the LLM-generated mutants have higher non-compilability, duplication, and equivalent-mutant rates by 36.1, 13.1, and 4.2 percentage points, respectively. These findings are immediately actionable for both research and practice. They allow practitioners to have greater confidence in deploying LLM-based mutation, while researchers now have a state-of-the-art baseline against which to develop techniques that further improve effectiveness and reduce cost.
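To make the abstract's terminology concrete, the following is a minimal, hypothetical Java sketch of our own; the class, method names, and values are illustrative assumptions, not taken from the paper or its benchmarks. A mutant is a small syntactic change to the program under test; a test "kills" (detects) a mutant if the test's outcome differs on the mutant, and an equivalent mutant is one whose change can never alter observable behavior, so no test can kill it.

```java
// Illustrative only: one original method and two mutants of it.
public class Account {
    private int balance;

    // Original method under test.
    public boolean withdraw(int amount) {
        if (amount <= 0 || amount > balance) {
            return false;
        }
        balance -= amount;
        return true;
    }

    // Killable mutant: a typical rule-based relational-operator
    // change (> replaced by >=). Withdrawing exactly the balance
    // now fails, so a suitable test kills (detects) this mutant.
    public boolean withdrawMutant(int amount) {
        if (amount <= 0 || amount >= balance) {
            return false;
        }
        balance -= amount;
        return true;
    }

    // Equivalent mutant: for int arguments, (amount < 1) holds
    // exactly when (amount <= 0) does, so this change can never
    // alter behavior and no test can kill it. Such mutants add
    // cost without contributing to fault detection.
    public boolean withdrawEquivalent(int amount) {
        if (amount < 1 || amount > balance) {
            return false;
        }
        balance -= amount;
        return true;
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.balance = 100;
        System.out.println(a.withdraw(100));        // true

        Account b = new Account();
        b.balance = 100;
        System.out.println(b.withdrawMutant(100));  // false -> mutant killed
    }
}
```

The non-compilability and duplication rates the abstract mentions are measured over generated mutants in the same spirit: a mutant that does not compile, or that textually duplicates another mutant, consumes generation and execution budget without adding detection power.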
Bo Wang, Mingda Chen, Youfang Lin, Mark Harman, Mike Papadakis, Jie M. Zhang
Subject area: Computing Technology, Computer Technology
Bo Wang, Mingda Chen, Youfang Lin, Mark Harman, Mike Papadakis, Jie M. Zhang. A Comprehensive Study on Large Language Models for Mutation Testing [EB/OL]. (2025-06-29) [2025-07-16]. https://arxiv.org/abs/2406.09843.