National Preprint Platform

Optimal Policy Minimum Bayesian Risk


Source: arXiv
English Abstract

Inference scaling can help LLMs solve complex reasoning problems through extended runtime computation. On top of targeted supervision for long chain-of-thought (long-CoT) generation, purely inference-time techniques such as best-of-N (BoN) sampling, majority voting, or more generally, minimum Bayes risk decoding (MBRD), can further improve LLM accuracy by generating multiple candidate solutions and aggregating over them. These methods typically leverage additional signals in the form of reward models and risk/similarity functions that compare generated samples, e.g., exact match in some normalized space or standard similarity metrics such as Rouge. Here we present a novel method for incorporating reward and risk/similarity signals into MBRD. Based on the concept of optimal policy in KL-controlled reinforcement learning, our framework provides a simple and well-defined mechanism for leveraging such signals, offering several advantages over traditional inference-time methods: higher robustness, improved accuracy, and well-understood asymptotic behavior. In addition, it allows for the development of a sample-efficient variant of MBRD that can adjust the number of samples to generate according to the difficulty of the problem, without relying on majority vote counts. We empirically demonstrate the advantages of our approach on math (MATH-$500$) and coding (HumanEval) tasks using recent open-source models. We also present a comprehensive analysis of its accuracy-compute trade-offs.

Ramón Fernandez Astudillo, Md Arafat Sultan, Aashka Trivedi, Yousef El-Kurdi, Tahira Naseem, Radu Florian, Salim Roukos

Subject: Computing Technology, Computer Technology

Ramón Fernandez Astudillo, Md Arafat Sultan, Aashka Trivedi, Yousef El-Kurdi, Tahira Naseem, Radu Florian, Salim Roukos. Optimal Policy Minimum Bayesian Risk [EB/OL]. (2025-05-22) [2025-06-27]. https://arxiv.org/abs/2505.17242.
