National Preprint Platform (国家预印本平台)

GP and LLMs for Program Synthesis: No Clear Winners

Source: arXiv
Abstract

Genetic programming (GP) and large language models (LLMs) differ in how program specifications are provided: GP uses input-output examples, while LLMs use text descriptions. In this work, we compared the ability of PushGP and GPT-4o to synthesize computer programs for tasks from the PSB2 benchmark suite. We used three prompt variants with GPT-4o: input-output examples only (data-only), a textual description of the task (text-only), and a combination of both (data-text). Additionally, we varied the number of input-output examples available for building programs. For each synthesizer and task combination, we compared success rates across all program synthesizers, as well as the similarity between successful GPT-4o-synthesized programs. We found that the combination of PushGP and GPT-4o with data-text prompting solved the greatest number of tasks (23 of the 25), even though several tasks were solved exclusively by only one of the two synthesizers. We also observed that PushGP and GPT-4o with data-only prompting solved fewer tasks as the training set size decreased, while the remaining synthesizers saw no such decrease. We also detected significant differences in similarity between the successful programs GPT-4o synthesized with text-only versus data-only prompting. With no program synthesizer dominating, this work highlights the importance of the different optimization techniques used by PushGP and LLMs to synthesize programs.
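The three prompting conditions described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `build_prompt` helper, its wording, and the example task are all hypothetical, standing in for how a data-only, text-only, or data-text prompt might be assembled from a PSB2-style task.

```python
# Hypothetical sketch of the three GPT-4o prompt variants from the abstract:
# data-only (examples), text-only (description), data-text (both).

def build_prompt(variant, description=None, examples=None):
    """Assemble a program-synthesis prompt for one of the three variants."""
    parts = ["Write a program that solves the following task."]
    if variant in ("text-only", "data-text"):
        parts.append(f"Task description: {description}")
    if variant in ("data-only", "data-text"):
        lines = [f"  {inp!r} -> {out!r}" for inp, out in examples]
        parts.append("Input-output examples:\n" + "\n".join(lines))
    return "\n\n".join(parts)

# Hypothetical PSB2-style task: map an integer to its Fizz Buzz string.
examples = [((1,), "1"), ((3,), "Fizz"), ((5,), "Buzz")]
prompt = build_prompt(
    "data-text",
    description="Given an integer, return its Fizz Buzz string.",
    examples=examples,
)
```

Varying the length of the `examples` list corresponds to the study's manipulation of training set size, which affected only the synthesizers that consume examples (PushGP and the data-only prompt).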

Jose Guadalupe Hernandez, Anil Kumar Saini, Gabriel Ketron, Jason H. Moore

Subject: Computing Technology; Computer Technology

Jose Guadalupe Hernandez, Anil Kumar Saini, Gabriel Ketron, Jason H. Moore. GP and LLMs for Program Synthesis: No Clear Winners [EB/OL]. (2025-08-05) [2025-08-16]. https://arxiv.org/abs/2508.03966.
