LLM-Driven Intrinsic Motivation for Sparse Reward Reinforcement Learning
This paper explores the combination of two intrinsic motivation strategies to improve the efficiency of reinforcement learning (RL) agents in environments with extremely sparse rewards, where traditional learning struggles due to infrequent positive feedback. We propose integrating Variational State as Intrinsic Reward (VSIMR), which uses Variational Autoencoders (VAEs) to reward state novelty, with an intrinsic reward approach derived from Large Language Models (LLMs). The LLM leverages its pre-trained knowledge to generate reward signals from descriptions of the environment and the goal, guiding the agent. We implemented this combined approach with an Advantage Actor-Critic (A2C) agent in the MiniGrid DoorKey environment, a benchmark for sparse rewards. Our empirical results show that the combined strategy significantly improves agent performance and sample efficiency compared to each strategy used individually or to a standard A2C agent, which failed to learn. Analysis of the learning curves indicates that the two strategies complement different aspects of the environment and task: VSIMR drives exploration of new states, while the LLM-derived rewards facilitate progressive exploitation toward the goal.
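The abstract does not specify how the two intrinsic signals are combined, so the following is only a minimal sketch of one plausible reading: a small VAE whose reconstruction error serves as a novelty bonus (VSIMR), an LLM queried with environment and goal descriptions to score progress, and a shaped reward that adds both terms to the extrinsic reward before the A2C update. The scoring function `llm_score_fn`, the weighting coefficients, and the prompt format are all hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn


class VAE(nn.Module):
    """Small VAE over flattened observations; reconstruction error acts as a novelty signal."""

    def __init__(self, obs_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar


def vsimr_reward(vae: VAE, obs: torch.Tensor) -> float:
    """Higher reconstruction error -> more novel state -> larger intrinsic reward."""
    with torch.no_grad():
        recon, _, _ = vae(obs)
        return torch.mean((recon - obs) ** 2).item()


def llm_reward(llm_score_fn, state_description: str, goal_description: str) -> float:
    """Ask an LLM (via a hypothetical llm_score_fn) to rate goal progress in [0, 1]."""
    prompt = (
        f"Environment state: {state_description}\n"
        f"Goal: {goal_description}\n"
        "Rate from 0 to 1 how much this state progresses toward the goal."
    )
    return float(llm_score_fn(prompt))


def combined_reward(r_env: float, r_vsimr: float, r_llm: float,
                    beta_vae: float = 0.1, beta_llm: float = 0.1) -> float:
    """Shaped reward fed to the A2C update: extrinsic plus weighted intrinsic terms (weights assumed)."""
    return r_env + beta_vae * r_vsimr + beta_llm * r_llm
```

In this reading, the VAE term encourages visiting states the agent has not yet modeled well, while the LLM term provides a denser, goal-directed signal, which matches the paper's description of exploration and progressive exploitation complementing each other.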
André Quadros, Cassio Silva, Ronnie Alves
Computing Technology, Computer Technology
André Quadros, Cassio Silva, Ronnie Alves. LLM-Driven Intrinsic Motivation for Sparse Reward Reinforcement Learning [EB/OL]. (2025-08-25) [2025-09-05]. https://arxiv.org/abs/2508.18420.