
From General to Targeted Rewards: Surpassing GPT-4 in Open-Ended Long-Context Generation

Source: arXiv
English Abstract

Current research on long contexts in Large Language Models (LLMs) primarily focuses on long-context understanding, while Open-ended Long Text Generation (Open-LTG) remains insufficiently explored. Training a long-context generation model requires curation of gold-standard reference data, which is typically nonexistent for informative Open-LTG tasks. Moreover, previous methods utilize only general assessments as reward signals, which limits accuracy. To bridge this gap, we introduce ProxyReward, an innovative reinforcement learning (RL) based framework comprising a dataset and a reward-signal computation method. First, the ProxyReward Dataset is generated automatically by the model through simple prompts, obviating the need for extensive labeled data or significant manual effort. Second, the ProxyReward Signal offers a targeted evaluation of information comprehensiveness and accuracy for specific questions. Experimental results indicate that our method ProxyReward surpasses even GPT-4-Turbo: it improves performance by 20% on Open-LTG tasks when training widely used open-source models, while also surpassing the LLM-as-a-Judge approach. Our work presents effective methods to enhance the ability of LLMs to address complex open-ended questions posed by humans.
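The abstract does not specify how the ProxyReward Signal is computed, so the sketch below is only a minimal illustration of the general idea of a targeted reward that scores a generated answer for comprehensiveness and accuracy against question-specific criteria. The `key_points` field, the substring-based coverage check, the external `verifier` callable, and the 0.5/0.5 weighting are all hypothetical assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch of a targeted reward for open-ended long-text generation.
# All names and scoring choices below are illustrative assumptions, not the
# ProxyReward implementation described in the paper.
from dataclasses import dataclass


@dataclass
class RewardExample:
    question: str
    key_points: list[str]  # assumed: salient facts a good answer should cover


def coverage_score(answer: str, key_points: list[str]) -> float:
    """Comprehensiveness: fraction of expected key points mentioned in the answer."""
    if not key_points:
        return 0.0
    hits = sum(1 for kp in key_points if kp.lower() in answer.lower())
    return hits / len(key_points)


def accuracy_score(answer: str, verifier) -> float:
    """Accuracy: delegate to a claim-level verifier (e.g. an LLM or NLI model)
    assumed to return a score in [0, 1]."""
    return verifier(answer)


def targeted_reward(answer: str, example: RewardExample, verifier,
                    w_cov: float = 0.5, w_acc: float = 0.5) -> float:
    """Combine comprehensiveness and accuracy into a single scalar RL reward."""
    return (w_cov * coverage_score(answer, example.key_points)
            + w_acc * accuracy_score(answer, verifier))
```

Under this reading, the scalar returned by `targeted_reward` would be fed to a standard RL fine-tuning loop in place of a general, question-agnostic quality assessment.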

Zhihan Guo, Jiele Wu, Wenqian Cui, Yifei Zhang, Minda Hu, Yufei Wang, Irwin King

Computing Technology; Computer Technology

Zhihan Guo, Jiele Wu, Wenqian Cui, Yifei Zhang, Minda Hu, Yufei Wang, Irwin King. From General to Targeted Rewards: Surpassing GPT-4 in Open-Ended Long-Context Generation [EB/OL]. (2025-06-19) [2025-06-30]. https://arxiv.org/abs/2506.16024.
