National Preprint Platform

Dream to Generalize: Zero-Shot Model-Based Reinforcement Learning for Unseen Visual Distractions
Source: arXiv
Abstract

Model-based reinforcement learning (MBRL) has been used to efficiently solve vision-based control tasks with high-dimensional image observations. Although recent MBRL algorithms perform well on the observations they were trained on, they fail when faced with visual distractions in observations. These task-irrelevant distractions (e.g., clouds, shadows, and light) may be constantly present in real-world scenarios. In this study, we propose a novel self-supervised method, Dream to Generalize (Dr. G), for zero-shot MBRL. Dr. G trains its encoder and world model with dual contrastive learning, which efficiently captures task-relevant features among multi-view data augmentations. We also introduce a recurrent state inverse dynamics model that helps the world model to better understand the temporal structure. The proposed methods can enhance the robustness of the world model against visual distractions. To evaluate the generalization performance, we first train Dr. G on simple backgrounds and then test it on complex natural video backgrounds in the DeepMind Control suite, and on the randomized environments in Robosuite. Dr. G yields performance improvements of 117% and 14% over prior works, respectively. Our code is open-sourced and available at https://github.com/JeongsooHa/DrG.git
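The contrastive objective described above pulls together embeddings of different augmented views of the same observation while pushing apart embeddings of other observations in the batch. The abstract does not give the exact loss, but a common choice for this kind of multi-view contrastive learning is an InfoNCE-style loss; the sketch below (plain NumPy, function name and temperature value are illustrative assumptions, not the authors' implementation) shows the general idea:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss between two views.

    z1, z2: (batch, dim) embeddings of two augmentations of the
    same batch of observations; row i of z1 and row i of z2 are a
    positive pair, all other rows serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature        # pairwise similarities
    idx = np.arange(len(z1))                # positives on the diagonal
    # Cross-entropy over each row: matching views attract,
    # mismatched pairs within the batch repel
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[idx, idx].mean()
```

When the two views encode to similar vectors, the diagonal dominates each softmax row and the loss is near zero; unrelated embeddings yield a loss near log(batch size).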

Jeongsoo Ha, Kyungsoo Kim, Yusung Kim

Subjects: Computing Technology, Computer Technology

Jeongsoo Ha, Kyungsoo Kim, Yusung Kim. Dream to Generalize: Zero-Shot Model-Based Reinforcement Learning for Unseen Visual Distractions [EB/OL]. (2025-06-04) [2025-07-21]. https://arxiv.org/abs/2506.05419.
