Don't Judge Code by Its Cover: Exploring Biases in LLM Judges for Code Evaluation
With the growing use of large language models (LLMs) as evaluators, their application has expanded to code evaluation tasks, where they assess the correctness of generated code without relying on reference implementations. While this offers scalability and flexibility, it also raises a critical, unresolved question: can LLM judges fairly and robustly evaluate semantically equivalent code with superficial variations? Functionally correct code often exhibits variations, such as differences in variable names, comments, or formatting, that should not influence its correctness. Yet whether LLM judges can reliably handle these variations remains unclear. We present the first comprehensive study of this issue, defining six types of potential bias in code evaluation and revealing their systematic impact on LLM judges. Across five programming languages and multiple LLMs, we empirically demonstrate that all tested LLM judges are susceptible to both positive and negative biases, resulting in inflated or unfairly low scores. Moreover, we observe that LLM judges remain vulnerable to these biases even when prompted to generate test cases before scoring, highlighting the need for more robust code evaluation methods.
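To make the notion of superficial variation concrete, the following Python sketch (an illustrative assumption, not drawn from the paper's data or benchmark) shows two functionally equivalent implementations that differ only in naming, commenting, and formatting; under unbiased evaluation, an LLM judge should assign them the same correctness score.

```python
# Illustrative sketch (hypothetical example): two semantically equivalent
# solutions that differ only in superficial details a judge should ignore.

def solution_a(nums):
    """Return the sum of the even numbers in nums."""
    return sum(n for n in nums if n % 2 == 0)

def solution_b(x):
    # TODO: clean this up later  <- stylistic noise; the code is still correct
    total = 0
    for value in x:
        if value % 2 == 0:
            total += value
    return total

# Both implementations are functionally identical, so a robust judge should
# score them the same regardless of variable names, comments, or formatting.
assert solution_a([1, 2, 3, 4]) == solution_b([1, 2, 3, 4]) == 6
```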
Jiwon Moon, Yerin Hwang, Dongryeol Lee, Taegwan Kang, Yongil Kim, Kyomin Jung
Computing Technology, Computer Technology
Jiwon Moon, Yerin Hwang, Dongryeol Lee, Taegwan Kang, Yongil Kim, Kyomin Jung. Don't Judge Code by Its Cover: Exploring Biases in LLM Judges for Code Evaluation [EB/OL]. (2025-05-22) [2025-07-16]. https://arxiv.org/abs/2505.16222.