Toward Automated Validation of Language Model Synthesized Test Cases using Semantic Entropy
Modern Large Language Model (LLM)-based programming agents often rely on test execution feedback to refine their generated code, where the tests themselves are synthesized by LLMs. However, LLMs may produce invalid or hallucinated test cases, which can mislead feedback loops and degrade agents' ability to refine and improve code. This paper introduces VALTEST, a novel framework that leverages semantic entropy to automatically validate test cases generated by LLMs. By analyzing the semantic structure of test cases and computing entropy-based uncertainty measures, VALTEST trains a machine learning model to classify test cases as valid or invalid and filters out the invalid ones. Experiments on multiple benchmark datasets and various LLMs show that VALTEST not only boosts test validity by up to 29% but also improves code generation performance, as evidenced by significant increases in pass@1 scores. Our extensive experiments also reveal that semantic entropy is a reliable indicator for distinguishing valid from invalid test cases, providing a robust solution for improving the correctness of LLM-generated test cases used in software testing and code generation.
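The abstract does not spell out how the entropy-based uncertainty measure is computed; a minimal sketch, assuming the common formulation of semantic entropy (sample an LLM several times for the same prompt, cluster semantically equivalent outputs, and take the Shannon entropy over cluster probabilities) applied to test-case assertions. The `equivalence_key` function and the example samples are hypothetical illustrations, not part of the paper:

```python
import math
from collections import Counter

def semantic_entropy(samples, equivalence_key):
    """Shannon entropy over semantic clusters of sampled LLM outputs.

    samples: repeated generations for the same prompt (test-case strings).
    equivalence_key: maps a sample to a canonical form; samples sharing a
        key are treated as semantically equivalent (here, the expected
        value a test asserts).
    """
    clusters = Counter(equivalence_key(s) for s in samples)
    total = sum(clusters.values())
    # Probability mass of each semantic cluster, then Shannon entropy.
    return -sum((n / total) * math.log(n / total) for n in clusters.values())

# Hypothetical usage: five sampled assertions for the same test input.
samples = [
    "assert add(2, 3) == 5",
    "assert add(2, 3) == 5",
    "assert add(2, 3) == 5",
    "assert add(2, 3) == 6",  # hallucinated expected value
    "assert add(2, 3) == 5",
]
# Cluster by the asserted expected value on the right of "==".
h = semantic_entropy(samples, lambda s: s.split("==")[-1].strip())
```

Under this sketch, low entropy means the model produces the same expected value consistently, which a downstream classifier could use as one feature signalling a likely-valid test case.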
Hamed Taherkhani, Jiho Shin, Muhammad Ammar Tahir, Md Rakib Hossain Misu, Vineet Sunil Gattani, Hadi Hemmati
Computing technology, computer technology
Hamed Taherkhani, Jiho Shin, Muhammad Ammar Tahir, Md Rakib Hossain Misu, Vineet Sunil Gattani, Hadi Hemmati. Toward Automated Validation of Language Model Synthesized Test Cases using Semantic Entropy [EB/OL]. (2025-07-29) [2025-08-06]. https://arxiv.org/abs/2411.08254.