
DSCodeBench: A Realistic Benchmark for Data Science Code Generation

Source: arXiv

Abstract

We introduce DSCodeBench, a new benchmark designed to evaluate large language models (LLMs) on complicated and realistic data science code generation tasks. DSCodeBench consists of 1,000 carefully constructed problems sourced from real-world problems on GitHub across ten widely used Python data science libraries. Compared to the current state-of-the-art benchmark DS-1000, DSCodeBench offers a more challenging and representative testbed: longer code solutions, more comprehensive data science libraries, clearer and better-structured problem descriptions, and stronger test suites. To construct DSCodeBench, we develop a robust pipeline that combines task scope selection, code construction, test case generation, and problem description synthesis. The process is paired with rigorous manual editing to ensure alignment and enhance evaluation reliability. Experimental results show that DSCodeBench exhibits robust scaling behavior, where larger models systematically outperform smaller ones, validating its ability to distinguish model capabilities. The best LLM we test, GPT-4o, achieves a pass@1 of only 0.202, indicating that LLMs still have substantial room for improvement on realistic data science code generation tasks. We believe DSCodeBench will serve as a rigorous and trustworthy foundation for advancing LLM-based data science programming.
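For context on the headline number: pass@1 in code-generation benchmarks is typically computed with the unbiased pass@k estimator of Chen et al. (2021), where n samples are drawn per problem and c of them pass the test suite. Below is a minimal Python sketch of that standard estimator; the abstract does not specify DSCodeBench's exact sampling setup, so the n=10 example is purely illustrative.

    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        # Unbiased pass@k estimator (Chen et al., 2021):
        #   n = samples generated per problem
        #   c = samples that pass all tests
        #   k = evaluation budget
        if n - c < k:
            return 1.0  # every size-k draw contains at least one passing sample
        return 1.0 - comb(n - c, k) / comb(n, k)

    # Illustrative only: 10 samples with 2 passing gives pass@1 = 0.2,
    # the same scale as the GPT-4o score of 0.202 reported above.
    print(pass_at_k(n=10, c=2, k=1))  # 0.2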

Jie M. Zhang, Shuyin Ouyang, Dong Huang, Jingwen Guo, Zeyu Sun, Qihao Zhu

Computing technology, computer technology

Jie M. Zhang, Shuyin Ouyang, Dong Huang, Jingwen Guo, Zeyu Sun, Qihao Zhu. DSCodeBench: A Realistic Benchmark for Data Science Code Generation [EB/OL]. (2025-07-02) [2025-07-16]. https://arxiv.org/abs/2505.15621.
