Reward Models Enable Scalable Code Verification by Trading Accuracy for Throughput
The standard paradigm for solving coding tasks via large language models (LLMs) is to generate-then-rank programs, where the latter step uses a verifier in the ranking process. The growing consensus is that a comprehensive verifier (e.g., a full test suite) should be prioritized over an outcome reward model (ORM) whenever possible, with little consideration given to the trade-offs involved. We aim to challenge this assumption by systematically exploring the tradeoff between speed and accuracy. We find that ORMs play a crucial role in scaling verification through trading accuracy for speed, even when a comprehensive verifier is available. Their value becomes especially apparent when used in a generate-prune-then-rank approach, where a faster but less accurate verifier removes incorrect solutions prior to ranking -- leading to a system that is 11.65x faster while only being 8.33% less accurate than the full test suite. We analyze the generate-prune-then-rank approach and show that it works by filtering out incorrect but highly ranked solutions. These findings enable the design of scalable and accurate program ranking systems.
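To make the generate-prune-then-rank idea concrete, below is a minimal illustrative sketch (not the authors' implementation). The function names `cheap_verifier` and `orm_score`, and the fallback behavior when pruning removes every candidate, are assumptions introduced only for illustration: a fast but imperfect verifier filters candidates before the outcome reward model ranks what remains.

```python
from typing import Callable, List

def generate_prune_then_rank(
    candidates: List[str],
    cheap_verifier: Callable[[str], bool],  # hypothetical fast, imperfect check (e.g., a small test subset)
    orm_score: Callable[[str], float],      # hypothetical ORM score; higher means more likely correct
) -> List[str]:
    """Rank candidate programs after pruning with a cheap verifier.

    Pruning removes clearly incorrect solutions before the reward-model
    ranking, trading some accuracy for a large reduction in verification cost.
    """
    # Prune: keep only candidates the fast verifier accepts.
    survivors = [c for c in candidates if cheap_verifier(c)]
    # Assumed fallback: if pruning rejects everything, rank the full pool instead.
    if not survivors:
        survivors = candidates
    # Rank survivors by ORM score, best first.
    return sorted(survivors, key=orm_score, reverse=True)
```

Under this sketch, the expensive step (ORM scoring or full test execution) runs only on the pruned set, which is how the approach filters out incorrect but highly ranked solutions before they can dominate the final ranking.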
Gabriel Orlanski, Nicholas Roberts, Aws Albarghouthi, Frederic Sala
Computing Technology, Computer Technology
Gabriel Orlanski, Nicholas Roberts, Aws Albarghouthi, Frederic Sala. Reward Models Enable Scalable Code Verification by Trading Accuracy for Throughput [EB/OL]. (2025-06-11) [2025-06-19]. https://arxiv.org/abs/2506.10056.