
PBE Meets LLM: When Few Examples Aren't Few-Shot Enough

Source: arXiv
Abstract

Large language models (LLMs) can generate code from natural language descriptions. Their performance is typically evaluated on programming benchmarks that simulate real-world tasks, where specifications take the form of docstrings, function signatures, or bug reports, and the generated program is checked against predefined test cases. In contrast, Programming by Example (PBE) uses input-output examples as the specification. Traditional PBE systems rely on search-based methods over restricted transformation spaces and are usually designed for narrow domains and fixed input formats. How well LLMs perform on PBE tasks remains unclear. In this work, we evaluate LLMs on PBE tasks involving tabular data transformations: we prompt models to generate functions that convert an input table to an output table, then test the generated functions on unseen inputs to measure accuracy. Our study covers multiple LLMs and compares prompting strategies, such as one-shot vs. multi-try, as well as performance with and without PBE-specific knowledge. Finally, we propose a hybrid method that calls a traditional PBE solver first and falls back to LLMs if necessary. Our results show that LLMs support more diverse input formats and achieve higher accuracy than conventional methods, but they struggle with tasks that contain ambiguity. The hybrid approach improves overall success by combining the strengths of both approaches.
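To make the methodology concrete, below is a minimal Python sketch of the hybrid strategy and evaluation loop described in the abstract. The solver and LLM interfaces (pbe_solver, llm_synthesize) are hypothetical stand-ins, not the paper's published API, and the row-dict table encoding is likewise an assumption for illustration.

    from typing import Callable, Optional

    # Hypothetical encoding (an assumption, not the paper's data model):
    # a table is a list of rows, each row a dict mapping column -> value.
    Table = list[dict[str, str]]
    Example = tuple[Table, Table]  # (input table, expected output table)
    Program = Callable[[Table], Table]

    def is_consistent(program: Program, examples: list[Example]) -> bool:
        """Check that a candidate program reproduces every given example."""
        try:
            return all(program(inp) == out for inp, out in examples)
        except Exception:
            return False

    def hybrid_solve(
        examples: list[Example],
        pbe_solver: Callable[[list[Example]], Optional[Program]],
        llm_synthesize: Callable[[list[Example]], Program],
        max_tries: int = 3,
    ) -> Optional[Program]:
        """Try a restricted-domain PBE solver first; fall back to the LLM,
        re-prompting up to max_tries times (the 'multi-try' strategy)."""
        program = pbe_solver(examples)
        if program is not None and is_consistent(program, examples):
            return program
        for _ in range(max_tries):
            candidate = llm_synthesize(examples)
            if is_consistent(candidate, examples):
                return candidate
        return None

    def accuracy_on_unseen(program: Program, held_out: list[Example]) -> float:
        """Generalization test: fraction of unseen cases the program solves."""
        solved = 0
        for inp, out in held_out:
            try:
                solved += program(inp) == out
            except Exception:
                pass  # a crashing program counts as a failure on this case
        return solved / len(held_out) if held_out else 0.0

Validating candidates against the given examples before accepting them, and measuring accuracy only on held-out inputs, mirrors the abstract's distinction between fitting the specification and generalizing from it.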

Shuning Zhang, Yongjoo Park

Computing technology, computer technology

Shuning Zhang, Yongjoo Park. PBE Meets LLM: When Few Examples Aren't Few-Shot Enough [EB/OL]. (2025-07-07) [2025-07-19]. https://arxiv.org/abs/2507.05403.
