CodeFlowBench: A Multi-turn, Iterative Benchmark for Complex Code Generation
Modern software development demands code that is maintainable, testable, and scalable, achieved by organizing implementations into modular components and iteratively reusing existing code. We formalize this iterative, multi-turn paradigm as codeflow and introduce CodeFlowBench, the first benchmark designed to comprehensively evaluate LLMs' ability to perform codeflow, i.e., to implement new functionality by reusing existing functions over multiple turns. CodeFlowBench comprises 5,258 problems drawn from Codeforces and is continuously updated via an automated pipeline that decomposes each problem into subproblems with unit tests, based on dependency-tree and dataflow analysis. We further propose a novel evaluation framework featuring a dual assessment protocol and structural metrics derived from dependency trees. Extensive experiments on 16 popular LLMs reveal significant performance degradation in multi-turn scenarios; for instance, o1-mini retains only 20.8% Pass@1 in the multi-turn setting versus 37.8% in the single-turn setting. Finer-grained analysis shows that model performance is inversely correlated with dependency complexity. These findings not only highlight critical challenges in supporting real-world development workflows, but also establish CodeFlowBench as an essential tool for advancing code generation research.
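To make the codeflow setup concrete, below is a minimal illustrative sketch of how such a multi-turn evaluation could be driven: subproblems are visited bottom-up along a dependency tree, each generated function is unit-tested, and a problem counts toward Pass@1 only if every turn passes. This is our own sketch, not the authors' released pipeline; the names `Subproblem`, `eval_multi_turn`, and the oracle stand-in for the LLM are all hypothetical.

```python
# Hypothetical sketch of a codeflow-style multi-turn evaluation loop.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Subproblem:
    name: str                               # function to implement this turn
    spec: str                               # natural-language description for the model
    tests: List[Callable[[dict], bool]]     # unit tests run against the namespace
    deps: List["Subproblem"] = field(default_factory=list)

def post_order(node: Subproblem) -> List[Subproblem]:
    """Bottom-up traversal of the dependency tree: callees before callers."""
    order: List[Subproblem] = []
    for dep in node.deps:
        order.extend(post_order(dep))
    order.append(node)
    return order

def eval_multi_turn(root: Subproblem, generate: Callable[[str, dict], str]) -> bool:
    """One multi-turn episode: `generate` maps (spec, context) to source code for
    one function; earlier turns' functions stay in `namespace`, so the model
    can, and is expected to, reuse them."""
    namespace: dict = {}
    for sub in post_order(root):
        code = generate(sub.spec, namespace)
        try:
            exec(code, namespace)           # define this turn's function
        except Exception:
            return False
        if not all(test(namespace) for test in sub.tests):
            return False                    # any failing unit test fails the problem
    return True

def pass_at_1(problems: List[Subproblem], generate: Callable[[str, dict], str]) -> float:
    """Pass@1 with one sample per problem: fraction solved on the first try."""
    return sum(eval_multi_turn(p, generate) for p in problems) / len(problems)

# Toy usage: a leaf `double` reused by the root `quadruple`.
leaf = Subproblem("double", "Return 2*x.", [lambda ns: ns["double"](3) == 6])
root = Subproblem("quadruple", "Return 4*x, reusing double.",
                  [lambda ns: ns["quadruple"](3) == 12], deps=[leaf])
oracle = {  # stands in for an LLM call
    "Return 2*x.": "def double(x):\n    return 2 * x",
    "Return 4*x, reusing double.": "def quadruple(x):\n    return double(double(x))",
}
print(pass_at_1([root], lambda spec, ctx: oracle[spec]))  # -> 1.0
```

Under this reading, the single-turn arm of the dual assessment protocol would instead pose the whole problem once, with no incremental namespace of previously implemented functions to reuse.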
Sizhe Wang, Zhengren Wang, Dongsheng Ma, Yongan Yu, Rui Ling, Zhiyu Li, Feiyu Xiong, Wentao Zhang
Computing Technology, Computer Technology
Sizhe Wang, Zhengren Wang, Dongsheng Ma, Yongan Yu, Rui Ling, Zhiyu Li, Feiyu Xiong, Wentao Zhang. CodeFlowBench: A Multi-turn, Iterative Benchmark for Complex Code Generation [EB/OL]. (2025-04-30) [2025-07-17]. https://arxiv.org/abs/2504.21751.