
QHackBench: Benchmarking Large Language Models for Quantum Code Generation Using PennyLane Hackathon Challenges

Source: arXiv
Abstract

Recent advances in Large Language Models (LLMs) have demonstrated strong potential in code generation, yet their effectiveness in quantum computing remains underexplored. This paper benchmarks LLMs for PennyLane-based quantum code generation using real-world challenges from the Quantum Hackathon (QHack). We introduce QHackBench, a novel benchmark dataset derived from QHack competitions, and evaluate model performance under vanilla prompting and Retrieval-Augmented Generation (RAG). Our structured evaluation framework assesses functional correctness, syntactic validity, and execution success across varying challenge difficulties. Results indicate that RAG-enhanced models, supplemented with an augmented PennyLane dataset, produce results roughly comparable to standard prompting, particularly on complex quantum algorithms. Additionally, we introduce a multi-agent evaluation pipeline that iteratively refines incorrect solutions, further enhancing execution success rates. To foster further research, we commit to publicly releasing QHackBench, along with our evaluation framework and experimental results, enabling continued advancements in AI-assisted quantum programming.
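To make the three evaluation axes concrete, the following is a minimal, hypothetical sketch of how such a harness might grade one generated PennyLane solution. The `evaluate_solution()` helper, the `solve()` entry point, and the numeric tolerance are illustrative assumptions, not the authors' released framework.

```python
# Hypothetical sketch of the three evaluation axes named in the abstract:
# syntactic validity, execution success, and functional correctness.
# evaluate_solution() and the solve() entry point are illustrative
# assumptions, not part of the released QHackBench framework.
import ast

import pennylane as qml
from pennylane import numpy as np


def evaluate_solution(generated_code: str, reference_output) -> dict:
    result = {"syntax_ok": False, "executed": False, "correct": False}

    # 1. Syntactic validity: does the generated code parse as Python?
    try:
        ast.parse(generated_code)
        result["syntax_ok"] = True
    except SyntaxError:
        return result

    # 2. Execution success: run the code in an isolated namespace.
    #    (A real harness would sandbox and time-limit this step.)
    namespace = {"qml": qml, "np": np}
    try:
        exec(generated_code, namespace)
        result["executed"] = True
    except Exception:
        return result

    # 3. Functional correctness: compare the solution's output against
    #    the challenge's reference answer.
    try:
        output = namespace["solve"]()
        result["correct"] = bool(np.allclose(output, reference_output, atol=1e-3))
    except Exception:
        pass
    return result


# Usage: a correct solution for a simple Bell-state expectation task.
code = """
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit():
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

def solve():
    return circuit()
"""
print(evaluate_solution(code, reference_output=1.0))
# {'syntax_ok': True, 'executed': True, 'correct': True}
```

In a multi-agent refinement loop of the kind the abstract describes, the exception captured in step 2 or the mismatch detected in step 3 could presumably be fed back to the model as context for the next repair attempt.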

Abdul Basit, Minghao Shao, Haider Asif, Nouhaila Innan, Muhammad Kashif, Alberto Marchisio, Muhammad Shafique

Computing Technology, Computer Technology

Abdul Basit, Minghao Shao, Haider Asif, Nouhaila Innan, Muhammad Kashif, Alberto Marchisio, Muhammad Shafique. QHackBench: Benchmarking Large Language Models for Quantum Code Generation Using PennyLane Hackathon Challenges [EB/OL]. (2025-06-24) [2025-07-09]. https://arxiv.org/abs/2506.20008.
