BLEUBERI: BLEU is a surprisingly effective reward for instruction following

Source: arXiv
Abstract

Reward models are central to aligning LLMs with human preferences, but they are costly to train, requiring large-scale human-labeled preference data and powerful pretrained LLM backbones. Meanwhile, the increasing availability of high-quality synthetic instruction-following datasets raises the question: can simpler, reference-based metrics serve as viable alternatives to reward models during RL-based alignment? In this paper, we show first that BLEU, a basic string-matching metric, surprisingly matches strong reward models in agreement with human preferences on general instruction-following datasets. Based on this insight, we develop BLEUBERI, a method that first identifies challenging instructions and then applies Group Relative Policy Optimization (GRPO) using BLEU directly as the reward function. We demonstrate that BLEUBERI-trained models are competitive with models trained via reward model-guided RL across four challenging instruction-following benchmarks and three different base language models. A human evaluation further supports that the quality of BLEUBERI model outputs is on par with those from reward model-aligned models. Moreover, BLEUBERI models generate outputs that are more factually grounded than competing methods. Overall, we show that given access to high-quality reference outputs (easily obtained via existing instruction-following datasets or synthetic data generation), string matching-based metrics are cheap yet effective proxies for reward models during alignment. We release our code and data at https://github.com/lilakk/BLEUBERI.
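The core recipe described above is simple enough to sketch: score each sampled completion against a reference output with BLEU, then use the group-normalized scores as GRPO advantages in place of a learned reward model. Below is a minimal illustrative sketch in Python, assuming the sacrebleu package; the function names and normalization details are assumptions for illustration, not the authors' released implementation (see their repository for that).

```python
# Minimal sketch of a BLEU-based reward for GRPO-style alignment.
# Illustrative only, not the BLEUBERI codebase; assumes the `sacrebleu`
# package, and the function names here are hypothetical.
import statistics
import sacrebleu

def bleu_reward(completion: str, reference: str) -> float:
    # sacrebleu reports BLEU on a 0-100 scale; rescale to [0, 1].
    return sacrebleu.sentence_bleu(completion, [reference]).score / 100.0

def group_advantages(rewards: list[float]) -> list[float]:
    # GRPO computes advantages by normalizing rewards within the group of
    # completions sampled for the same prompt (no learned value model).
    mean = statistics.mean(rewards)
    std = statistics.stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mean) / (std + 1e-8) for r in rewards]

if __name__ == "__main__":
    reference = "Paris is the capital of France."
    completions = [
        "The capital of France is Paris.",
        "I think it might be Lyon.",
        "Paris is the capital of France.",
    ]
    rewards = [bleu_reward(c, reference) for c in completions]
    print(group_advantages(rewards))  # the exact match gets the largest advantage
```

Per the abstract, everything else about the RL loop stays as in standard reward-model-guided GRPO; only the reward signal is swapped for this cheap string-matching score.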

Yapei Chang, Yekyung Kim, Michael Krumdick, Amir Zadeh, Chuan Li, Chris Tanner, Mohit Iyyer

Subjects: Computing Technology; Computer Technology

Yapei Chang, Yekyung Kim, Michael Krumdick, Amir Zadeh, Chuan Li, Chris Tanner, Mohit Iyyer. BLEUBERI: BLEU is a surprisingly effective reward for instruction following [EB/OL]. (2025-05-16) [2025-06-23]. https://arxiv.org/abs/2505.11080.
