National Preprint Platform (国家预印本平台)

Dr. Boot: Bootstrapping Program Synthesis Language Models to Perform Repairing

Source: arXiv
Abstract

Language models for program synthesis are usually trained and evaluated on programming competition datasets (MBPP, APPS). However, these datasets are limited in size and quality, while these language models are extremely data hungry. Additionally, the program synthesis process of these language models is misaligned with how humans write code: humans iteratively develop code with the help of a compiler, whereas most program synthesis models currently produce code in one go. To address these issues, we introduce a bootstrapping algorithm for program synthesis that supports teaching models how to repair. We show that bootstrapping consistently outperforms regular fine-tuning. Compared to other work, our bootstrapped model performs on par with fine-tuned models that are 68% larger. Notably, bootstrapping with repairing also improves non-repairing performance compared to regular bootstrapping during inference. However, for our models, repairing during inference is likely inferior to simply sampling the same number of solutions. Furthermore, we identify issues with the example test cases in the training portion of the APPS dataset; these findings are valuable to the community, as many repairing and reinforcement learning methods rely on those test cases.
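The bootstrapping-with-repair idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual implementation: `generate`, `repair`, `run_tests`, and the task fields (`prompt`, `draft`, `fix`, `tests`) are all hypothetical stand-ins for the model and the execution sandbox. The loop collects model outputs that pass the example test cases as new synthesis training data, and successful repairs of failing outputs as repair training data.

```python
# Minimal sketch of one bootstrapping-with-repair round.
# All names here are illustrative stand-ins, not the paper's code.

def run_tests(program, tests):
    """Return True if the candidate program passes all example test cases."""
    env = {}
    try:
        exec(program, env)
        return all(env["solve"](x) == y for x, y in tests)
    except Exception:
        return False

def generate(task):
    # Stand-in for sampling a candidate solution from the model.
    return task["draft"]

def repair(task, failed_program, feedback):
    # Stand-in for re-prompting the model with the failed attempt
    # plus execution feedback.
    return task["fix"]

def bootstrap_round(tasks):
    """Collect (prompt, solution) pairs and (prompt, failed, fixed) triples."""
    synth_data, repair_data = [], []
    for task in tasks:
        candidate = generate(task)
        if run_tests(candidate, task["tests"]):
            synth_data.append((task["prompt"], candidate))
        else:
            fixed = repair(task, candidate, "failed example tests")
            if run_tests(fixed, task["tests"]):
                repair_data.append((task["prompt"], candidate, fixed))
    # In the full algorithm, the model would now be fine-tuned on
    # synth_data + repair_data before the next bootstrap round.
    return synth_data, repair_data
```

Under this sketch, a task whose first sample already passes its example tests contributes a synthesis pair, while a task whose sample fails but is successfully repaired contributes a repair triple; tasks where both fail contribute nothing, which is why the paper's noted issues with APPS example test cases matter for such methods.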

Noah van der Vleuten

Subject: Computing Technology; Computer Technology

Noah van der Vleuten. Dr. Boot: Bootstrapping Program Synthesis Language Models to Perform Repairing [EB/OL]. (2025-07-20) [2025-08-10]. https://arxiv.org/abs/2507.15889.
