
LongCodeBench: Evaluating Coding LLMs at 1M Context Windows

Source: arXiv
Abstract

Context lengths for models have grown rapidly, from thousands to millions of tokens in just a few years. The extreme context sizes of modern long-context models have made it difficult to construct realistic long-context benchmarks -- not only due to the cost of collecting million-context tasks but also in identifying realistic scenarios that require significant contexts. We identify code comprehension and repair as a natural testbed and challenge task for long-context models and introduce LongCodeBench (LCB), a benchmark to test LLM coding abilities in long-context scenarios. Our benchmark tests both the comprehension and repair capabilities of LCLMs in realistic and important settings by drawing from real-world GitHub issues and constructing QA (LongCodeQA) and bug fixing (LongSWE-Bench) tasks. We carefully stratify the complexity of our benchmark, enabling us to evaluate models across different scales -- ranging from Qwen2.5 14B Instruct to Google's flagship Gemini model. We find that long-context remains a weakness for all models, with performance drops such as from 29% to 3% for Claude 3.5 Sonnet, or from 70.2% to 40% for Qwen2.5.
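To make the task setup concrete, the sketch below illustrates one plausible shape for a LongCodeQA-style evaluation: repository files are concatenated into a long context, an issue-derived multiple-choice question is appended, and a model (any prompt-to-completion callable) is scored by accuracy. The field names, prompt format, and scoring here are illustrative assumptions, not the benchmark's actual data schema or harness.

```python
# Hypothetical sketch of a LongCodeQA-style evaluation loop.
# The dataset fields, prompt layout, and answer format are assumptions
# made for illustration; they are not taken from the LongCodeBench release.
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class LongCodeQAExample:
    repo_files: dict[str, str]   # path -> file contents forming the long context
    question: str                # question derived from a GitHub issue
    choices: list[str]           # candidate answers (multiple choice assumed)
    answer_index: int            # index of the correct choice


def build_prompt(example: LongCodeQAExample) -> str:
    """Concatenate repository files, then append the issue-derived question."""
    parts = []
    for path, text in example.repo_files.items():
        parts.append(f"### File: {path}\n{text}")
    options = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(example.choices))
    parts.append(
        f"### Question\n{example.question}\n{options}\nAnswer with a single letter."
    )
    return "\n\n".join(parts)


def evaluate(examples: Iterable[LongCodeQAExample],
             model: Callable[[str], str]) -> float:
    """Return the accuracy of `model` over the given examples."""
    correct = total = 0
    for ex in examples:
        reply = model(build_prompt(ex)).strip().upper()
        predicted = ord(reply[0]) - ord("A") if reply else -1
        correct += int(predicted == ex.answer_index)
        total += 1
    return correct / max(total, 1)


if __name__ == "__main__":
    # Toy item standing in for a real repository-scale (up to ~1M-token) context.
    ex = LongCodeQAExample(
        repo_files={"src/app.py": "def add(a, b):\n    return a + b\n"},
        question="What does add(2, 3) return?",
        choices=["5", "23", "None"],
        answer_index=0,
    )
    always_a = lambda prompt: "A"   # placeholder model that always answers (A)
    print(f"accuracy = {evaluate([ex], always_a):.2f}")
```

The same loop structure would apply to the bug-fixing (LongSWE-Bench) split, with the multiple-choice scoring replaced by patch application and test execution.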

Stefano Rando, Luca Romani, Alessio Sampieri, Luca Franco, John Yang, Yuta Kyuragi, Fabio Galasso, Tatsunori Hashimoto

Subjects: Computing Technology, Computer Technology

Stefano Rando, Luca Romani, Alessio Sampieri, Luca Franco, John Yang, Yuta Kyuragi, Fabio Galasso, Tatsunori Hashimoto. LongCodeBench: Evaluating Coding LLMs at 1M Context Windows [EB/OL]. (2025-05-12) [2025-06-19]. https://arxiv.org/abs/2505.07897.
