
VERINA: Benchmarking Verifiable Code Generation

Source: arXiv
Abstract

Large language models (LLMs) are increasingly integrated in software development, but ensuring correctness in LLM-generated code remains challenging and often requires costly manual review. Verifiable code generation -- jointly generating code, specifications, and proofs of code-specification alignment -- offers a promising path to address this limitation and further unleash LLMs' benefits in coding. Yet, there exists a significant gap in evaluation: current benchmarks often lack support for end-to-end verifiable code generation. In this paper, we introduce Verina (Verifiable Code Generation Arena), a high-quality benchmark enabling a comprehensive and modular evaluation of code, specification, and proof generation as well as their compositions. Verina consists of 189 manually curated coding tasks in Lean, with detailed problem descriptions, reference implementations, formal specifications, and extensive test suites. Our extensive evaluation of state-of-the-art LLMs reveals significant challenges in verifiable code generation, especially in proof generation, underscoring the need for improving LLM-based theorem provers in verification domains. The best model, OpenAI o4-mini, generates only 61.4% correct code, 51.0% sound and complete specifications, and 3.6% successful proofs, with one trial per task. We hope Verina will catalyze progress in verifiable code generation by providing a rigorous and comprehensive benchmark. We release our dataset on https://huggingface.co/datasets/sunblaze-ucb/verina and our evaluation code on https://github.com/sunblaze-ucb/verina.
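To make the task format concrete, the following is a minimal, hypothetical sketch (not drawn from the Verina dataset) of the code-specification-proof triple that verifiable code generation targets in Lean: the model must produce an implementation, a formal specification relating inputs to outputs, and a machine-checked proof that the implementation satisfies the specification.

-- Hypothetical illustration of the three artifacts a model must
-- produce for one task; names and spec are invented for exposition.

-- Code: a candidate implementation.
def myMax (a b : Int) : Int :=
  if a ≥ b then a else b

-- Specification: a predicate any correct result must satisfy.
def myMax_spec (a b result : Int) : Prop :=
  result ≥ a ∧ result ≥ b ∧ (result = a ∨ result = b)

-- Proof: the implementation meets the specification for all inputs.
theorem myMax_satisfies_spec (a b : Int) :
    myMax_spec a b (myMax a b) := by
  unfold myMax myMax_spec
  split <;> omega

Under this format, a specification is sound if every result it accepts is correct and complete if it accepts every correct result; the proof obligation is discharged by the Lean kernel rather than by testing, which is what distinguishes verifiable code generation from conventional test-based evaluation.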

Zhe Ye, Zhengxu Yan, Jingxuan He, Timothe Kasriel, Kaiyu Yang, Dawn Song

Subject: Computing Technology, Computer Technology

Zhe Ye, Zhengxu Yan, Jingxuan He, Timothe Kasriel, Kaiyu Yang, Dawn Song. VERINA: Benchmarking Verifiable Code Generation [EB/OL]. (2025-05-29) [2025-06-14]. https://arxiv.org/abs/2505.23135.
