
Quality Assessment of Python Tests Generated by Large Language Models

Source: arXiv
Abstract

The manual generation of test scripts is a time-intensive, costly, and error-prone process, indicating the value of automated solutions. Large Language Models (LLMs) have shown great promise in this domain, leveraging their extensive knowledge to produce test code more efficiently. This study investigates the quality of Python test code generated by three LLMs: GPT-4o, Amazon Q, and Llama 3.3. We evaluate the structural reliability of test suites generated under two distinct prompt contexts: Text2Code (T2C) and Code2Code (C2C). Our analysis includes the identification of errors and test smells, with a focus on correlating these issues to inadequate design patterns. Our findings reveal that most test suites generated by the LLMs contained at least one error or test smell. Assertion errors were the most common, comprising 64% of all identified errors, while the test smell Lack of Cohesion of Test Cases was the most frequently detected (41%). Prompt context significantly influenced test quality; textual prompts with detailed instructions often yielded tests with fewer errors but a higher incidence of test smells. Among the evaluated LLMs, GPT-4o produced the fewest errors in both contexts (10% in C2C and 6% in T2C), whereas Amazon Q had the highest error rates (19% in C2C and 28% in T2C). For test smells, Amazon Q had fewer detections in the C2C context (9%), while Llama 3.3 performed best in the T2C context (10%). Additionally, we observed a strong relationship between specific errors, such as assertion or indentation issues, and test case cohesion smells. These findings demonstrate opportunities for improving the quality of test generation by LLMs and highlight the need for future research to explore optimized generation scenarios and better prompt engineering strategies.
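
To make the two most frequently reported issues concrete, the sketch below shows what they can look like in a generated pytest suite. It is a hypothetical illustration, not an example from the paper: the function parse_price and both test names are invented for demonstration.

# Illustrative sketch (not from the paper): a hypothetical LLM-generated
# pytest suite exhibiting the two most common issues reported above.
import pytest


def parse_price(text: str) -> float:
    # Hypothetical unit under test: extracts a numeric price from a string.
    return float(text.replace("$", "").strip())


def test_parse_price_everything():
    # Test smell "Lack of Cohesion of Test Cases": a single test mixes
    # unrelated behaviours (basic parsing, rounding, and error handling).
    assert parse_price("$10.50") == 10.50
    assert round(parse_price("$0.999"), 2) == 1.0
    with pytest.raises(ValueError):
        parse_price("not a price")


def test_parse_price_integer():
    # Assertion error: the expected value has the wrong type, so the test
    # fails (parse_price returns the float 10.0, not the string "10").
    assert parse_price("$10") == "10"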

Victor Alves, Carla Bezerra, Ivan Machado, Larissa Rocha, Tássio Virgínio, Publio Silva

Subject: Computing Technology; Computer Technology

Victor Alves, Carla Bezerra, Ivan Machado, Larissa Rocha, Tássio Virgínio, Publio Silva. Quality Assessment of Python Tests Generated by Large Language Models [EB/OL]. (2025-06-17) [2025-06-29]. https://arxiv.org/abs/2506.14297.
