
Learning Instruction-Following Policies through Open-Ended Instruction Relabeling with Large Language Models

Source: arXiv
Abstract

Developing effective instruction-following policies in reinforcement learning remains challenging due to the reliance on extensive human-labeled instruction datasets and the difficulty of learning from sparse rewards. In this paper, we propose a novel approach that leverages the capabilities of large language models (LLMs) to automatically generate open-ended instructions retrospectively from previously collected agent trajectories. Our core idea is to employ LLMs to relabel unsuccessful trajectories by identifying meaningful subtasks the agent has implicitly accomplished, thereby enriching the agent's training data and substantially alleviating reliance on human annotations. Through this open-ended instruction relabeling, we efficiently learn a single unified instruction-following policy capable of handling diverse tasks. We empirically evaluate our proposed method in the challenging Craftax environment, demonstrating clear improvements in sample efficiency, instruction coverage, and overall policy performance compared to state-of-the-art baselines. Our results highlight the effectiveness of utilizing LLM-guided open-ended instruction relabeling to enhance instruction-following reinforcement learning.
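The relabeling step described in the abstract, asking an LLM which subtasks a failed trajectory already achieved and turning each answer into a successful training episode, can be sketched compactly. The snippet below is a minimal illustration rather than the paper's implementation: the names relabel_with_llm, RelabeledEpisode, and the stand-in fake_llm are hypothetical, the prompt format is an assumption, and a real system would query an actual language model and feed the relabeled episodes into an instruction-conditioned RL learner.

from dataclasses import dataclass
from typing import Callable, List, Tuple

# One transition is an (observation summary, action name) pair; short strings
# keep the serialized trajectory small enough to fit in an LLM prompt.
Transition = Tuple[str, str]

@dataclass
class RelabeledEpisode:
    instruction: str               # open-ended instruction proposed by the LLM
    transitions: List[Transition]  # the original (unsuccessful) trajectory
    reward: float                  # 1.0: the relabeled subtask counts as achieved

def relabel_with_llm(trajectory: List[Transition],
                     llm: Callable[[str], str],
                     max_instructions: int = 3) -> List[RelabeledEpisode]:
    """Ask the LLM which subtasks a failed trajectory implicitly completed,
    then emit one relabeled training episode per proposed instruction."""
    prompt = (
        "Below is an agent trajectory as (observation, action) pairs.\n"
        f"List up to {max_instructions} short imperative instructions describing "
        "subtasks the agent already completed, one per line.\n\n"
        + "\n".join(f"obs: {o} | act: {a}" for o, a in trajectory)
    )
    instructions = [line.strip() for line in llm(prompt).splitlines() if line.strip()]
    return [RelabeledEpisode(ins, trajectory, 1.0)
            for ins in instructions[:max_instructions]]

if __name__ == "__main__":
    # Stand-in LLM; a real system would call an actual model here.
    fake_llm = lambda _prompt: "collect wood\ncraft a wooden pickaxe"
    traj = [("tree nearby", "chop tree"), ("has wood", "open crafting menu")]
    for ep in relabel_with_llm(traj, fake_llm):
        print(ep.instruction, ep.reward)

In a sketch like this, each relabeled episode carries a positive reward for the instruction the LLM says was achieved, so unsuccessful rollouts still yield dense supervision for an instruction-conditioned policy.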

Zhicheng Zhang, Ziyan Wang, Yali Du, Fei Fang

Computing technology, computer technology

Zhicheng Zhang, Ziyan Wang, Yali Du, Fei Fang. Learning Instruction-Following Policies through Open-Ended Instruction Relabeling with Large Language Models [EB/OL]. (2025-06-24) [2025-07-16]. https://arxiv.org/abs/2506.20061
