TeViR: Text-to-Video Reward with Diffusion Models for Efficient Reinforcement Learning

Source: arXiv
Abstract

Developing scalable and generalizable reward engineering for reinforcement learning (RL) is crucial for creating general-purpose agents, especially in the challenging domain of robotic manipulation. While recent advances in reward engineering with Vision-Language Models (VLMs) have shown promise, their sparse reward nature significantly limits sample efficiency. This paper introduces TeViR, a novel method that leverages a pre-trained text-to-video diffusion model to generate dense rewards by comparing the predicted image sequence with current observations. Experimental results across 11 complex robotic tasks demonstrate that TeViR outperforms traditional methods leveraging sparse rewards and other state-of-the-art (SOTA) methods, achieving better sample efficiency and performance without ground truth environmental rewards. TeViR's ability to efficiently guide agents in complex environments highlights its potential to advance reinforcement learning applications in robotic manipulation.
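
The abstract states the mechanism only at a high level: a pre-trained text-to-video diffusion model predicts the image sequence for a task described in text, and a dense reward is computed by comparing the agent's current observation against that predicted sequence. Below is a minimal sketch of one plausible reading of that comparison; the frame encoder, the cosine-similarity metric, and the max-over-frames aggregation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical illustration of a dense video-prediction reward.
# The abstract does not specify the encoder or the metric; everything
# here is an assumption chosen for simplicity.

def encode_frame(frame: np.ndarray) -> np.ndarray:
    """Stand-in visual encoder: flatten and L2-normalize the image.
    A real system would use a learned embedding network."""
    flat = frame.reshape(-1)
    return flat / (np.linalg.norm(flat) + 1e-8)

def dense_reward(observation: np.ndarray,
                 predicted_frames: list[np.ndarray]) -> float:
    """Dense per-step reward: cosine similarity between the current
    observation and its best-matching frame in the sequence predicted
    by the text-to-video model."""
    obs_emb = encode_frame(observation)
    sims = [float(obs_emb @ encode_frame(f)) for f in predicted_frames]
    return max(sims)

# Usage: score one observation against an 8-frame predicted rollout.
rng = np.random.default_rng(0)
predicted = [rng.random((64, 64, 3)) for _ in range(8)]
obs = rng.random((64, 64, 3))
print(dense_reward(obs, predicted))
```

Because such a similarity score is available at every environment step, the agent receives shaped feedback throughout an episode rather than the single sparse success signal a VLM-based detector provides, which is the sample-efficiency argument the abstract makes.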

Dongbin Zhao, Yuhui Chen, Haoran Li, Zhennan Jiang, Haowei Wen

Subjects: Computing Technology; Computer Technology

Dongbin Zhao, Yuhui Chen, Haoran Li, Zhennan Jiang, Haowei Wen. TeViR: Text-to-Video Reward with Diffusion Models for Efficient Reinforcement Learning [EB/OL]. (2025-06-24) [2025-07-23]. https://arxiv.org/abs/2505.19769.
