CORE: Benchmarking LLMs Code Reasoning Capabilities through Static Analysis Tasks
Large language models (LLMs) have been widely adopted across diverse software engineering domains, such as code generation, program repair, and vulnerability detection. These applications require understanding beyond surface-level code patterns: value propagation, control flow, and interdependence between program elements. However, existing benchmarks primarily evaluate end-to-end outcomes, such as whether code is correctly repaired or generated, leaving the models' ability to reason about program semantics underexplored. This work presents CoRe, a high-quality, human-verified benchmark designed to evaluate LLMs on fundamental static analysis tasks. CoRe includes 12,553 task instances spanning data dependency, control dependency, and information flow across programs written in C/C++, Java, and Python. To ensure semantic diversity and reasoning complexity, we propose a semantics-aware diverse sampling strategy that selects targets and task instances based on structural coverage and dependency depth. We evaluate 10 mainstream LLMs and show that, while they perform well at identifying dependencies, models still struggle with tasks that require deeper semantic understanding and multi-step reasoning. We further conduct qualitative analyses to uncover key challenges, such as complex control structures and backward dependency patterns, offering insights into improving LLMs' code reasoning capabilities.
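As a minimal illustration of the three relations named in the abstract (a toy example for exposition, not drawn from CoRe's task instances), consider the following Python function, in which the variable and branch names are hypothetical:

```python
# Toy example (not from the benchmark) sketching the dependency relations
# that CoRe-style static analysis tasks ask a model to identify.

def process(secret: int, flag: bool) -> int:
    x = secret + 1   # data dependency: x is computed from `secret`
    if flag:         # the branch below is control-dependent on `flag`
        y = x * 2    # y: data-dependent on x, control-dependent on flag
    else:
        y = 0
    # information flow: the value of `secret` can influence the return
    # value along the flag == True path
    return y
```

A benchmark task of this kind might ask, for instance, whether the return value depends on `secret`, and if so, along which path; answering correctly requires tracing value propagation through the branch rather than matching surface-level patterns.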
Danning Xie, Mingwei Zheng, Xuwei Liu, Jiannan Wang, Chengpeng Wang, Lin Tan, Xiangyu Zhang
Computing Technology; Computer Technology
Danning Xie, Mingwei Zheng, Xuwei Liu, Jiannan Wang, Chengpeng Wang, Lin Tan, Xiangyu Zhang. CORE: Benchmarking LLMs Code Reasoning Capabilities through Static Analysis Tasks [EB/OL]. (2025-07-03) [2025-07-25]. https://arxiv.org/abs/2507.05269