Learning and Transferring Better with Depth Information in Visual Reinforcement Learning
Depth information is robust to variations in scene appearance and inherently encodes 3D spatial structure. In this paper, a visual backbone based on the vision transformer is proposed to fuse the RGB and depth modalities and enhance generalization. Each modality is first processed by a separate CNN stem, and the combined convolutional features are fed to a scalable vision transformer to obtain visual representations. Moreover, a contrastive unsupervised learning scheme operating on masked and unmasked tokens is designed to improve sample efficiency during reinforcement learning. For sim-to-real transfer, a flexible curriculum learning schedule is developed to apply domain randomization over the course of training.
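The fusion described above (per-modality stems whose features are combined into transformer tokens) can be illustrated with a minimal sketch. This is not the paper's implementation: the single linear patch projection below is a hypothetical stand-in for each CNN stem, and the element-wise token fusion is only one plausible combination choice.

```python
import numpy as np

def patch_stem(img, patch=8, dim=64, seed=0):
    """Toy per-modality 'stem': non-overlapping patch embedding via one
    linear projection (a stand-in for the paper's convolutional stem)."""
    rng = np.random.default_rng(seed)
    C, H, W = img.shape
    # split the image into an (H/patch x W/patch) grid of flattened patches
    patches = img.reshape(C, H // patch, patch, W // patch, patch)
    patches = patches.transpose(1, 3, 0, 2, 4).reshape(-1, C * patch * patch)
    W_proj = rng.standard_normal((C * patch * patch, dim)) * 0.02
    return patches @ W_proj  # (num_tokens, dim)

rng = np.random.default_rng(42)
rgb = rng.standard_normal((3, 64, 64))    # 3-channel RGB frame
depth = rng.standard_normal((1, 64, 64))  # 1-channel depth frame

# separate stems per modality, then fuse token-wise before the transformer
rgb_tokens = patch_stem(rgb, seed=1)
depth_tokens = patch_stem(depth, seed=2)
fused = rgb_tokens + depth_tokens  # element-wise fusion (one option)
# alternatively: np.concatenate([rgb_tokens, depth_tokens], axis=0)

print(fused.shape)  # 8x8 grid of tokens, each 64-dimensional
```

The fused token sequence would then be the input to the vision transformer; whether the modalities are summed, concatenated along the token axis, or mixed channel-wise is an architectural choice the abstract does not pin down.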
Zichun Xu, Yuntao Li, Zhaomin Wang, Lei Zhuang, Guocai Yang, Jingdong Zhao
Computing Technology, Computer Technology
Zichun Xu, Yuntao Li, Zhaomin Wang, Lei Zhuang, Guocai Yang, Jingdong Zhao. Learning and Transferring Better with Depth Information in Visual Reinforcement Learning [EB/OL]. (2025-07-15) [2025-07-22]. https://arxiv.org/abs/2507.09180.