
ELABORATION: A Comprehensive Benchmark on Human-LLM Competitive Programming

Source: arXiv

English Abstract

While recent research increasingly emphasizes the value of human-LLM collaboration in competitive programming and proposes numerous empirical methods, a comprehensive understanding remains elusive due to the fragmented nature of existing studies and their use of diverse, application-specific human feedback. Our work therefore serves a three-fold purpose. First, we present the first taxonomy of human feedback covering the entire programming process, which enables fine-grained evaluation. Second, we introduce ELABORATIONSET, a novel programming dataset specifically designed for human-LLM collaboration, meticulously annotated to enable large-scale simulated human feedback and to facilitate cost-effective studies with real human interaction. Third, we introduce ELABORATION, a novel benchmark that supports a thorough assessment of human-LLM competitive programming. With ELABORATION, we pinpoint the strengths and weaknesses of existing methods, thereby laying the foundation for future improvement. Our code and dataset are available at https://github.com/SCUNLP/ELABORATION

Xinwei Yang, Zhaofeng Liu, Chen Huang, Jiashuai Zhang, Tong Zhang, Yifan Zhang, Wenqiang Lei

Subject: Computing Technology, Computer Technology

Xinwei Yang, Zhaofeng Liu, Chen Huang, Jiashuai Zhang, Tong Zhang, Yifan Zhang, Wenqiang Lei. ELABORATION: A Comprehensive Benchmark on Human-LLM Competitive Programming [EB/OL]. (2025-05-22) [2025-06-13]. https://arxiv.org/abs/2505.16667
