Guided Policy Optimization under Partial Observability
Reinforcement Learning (RL) in partially observable environments poses significant challenges due to the complexity of learning under uncertainty. While additional information, such as that available in simulation, can enhance training, effectively leveraging it remains an open problem. To address this, we introduce Guided Policy Optimization (GPO), a framework that co-trains a guider and a learner. The guider exploits privileged information while remaining aligned with the learner's policy, which is trained primarily via imitation learning. We theoretically demonstrate that this learning scheme attains optimality comparable to direct RL, thereby overcoming key limitations of existing approaches. Empirically, GPO achieves strong performance across a range of tasks, including continuous control with partial observability and noise as well as memory-based challenges, significantly outperforming existing methods.
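The guider-learner co-training described above can be illustrated with a small sketch. Everything below (the toy two-action task with a hidden context, the linear softmax policies, the REINFORCE update, and the specific alignment and imitation terms) is an assumed simplification for illustration only, not the paper's actual GPO algorithm or hyperparameters.

```python
# Hypothetical sketch of guider-learner co-training under partial observability.
# The task, policies, and update rules are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Toy task: a hidden context s in {-1, +1} determines which of two actions is rewarded.
# The guider observes s (privileged); the learner only sees a noisy copy of s.
def sample_context():
    return rng.choice([-1.0, 1.0])

def noisy_obs(s, noise=0.8):
    return s + noise * rng.standard_normal()

def reward(s, action):
    return 1.0 if (action == 1) == (s > 0) else 0.0

# Linear-softmax policies over two actions; features are [x, 1].
theta_guider = np.zeros(2)   # acts on the privileged state s
theta_learner = np.zeros(2)  # acts on the noisy observation o

def policy(theta, x):
    logits = np.array([0.0, theta @ np.array([x, 1.0])])
    return softmax(logits)

lr, align_coef = 0.1, 0.5
for step in range(2000):
    s = sample_context()
    o = noisy_obs(s)

    # 1) Guider acts using privileged information and is updated with a
    #    REINFORCE-style gradient on the task reward.
    pg = policy(theta_guider, s)
    a = rng.choice(2, p=pg)
    r = reward(s, a)
    grad_logp = (a - pg[1]) * np.array([s, 1.0])  # d log pi(a|s) / d theta
    theta_guider += lr * r * grad_logp

    # 2) Alignment: pull the guider toward the learner's action distribution,
    #    so the guider does not drift to behaviour the learner cannot imitate.
    pl = policy(theta_learner, o)
    theta_guider -= lr * align_coef * (pg[1] - pl[1]) * np.array([s, 1.0])

    # 3) Learner imitates the guider (behaviour cloning on partial observations).
    theta_learner += lr * (pg[1] - pl[1]) * np.array([o, 1.0])

# Evaluate the learner alone: no privileged information at test time.
hits = 0
for _ in range(1000):
    s = sample_context()
    a = int(np.argmax(policy(theta_learner, noisy_obs(s))))
    hits += reward(s, a)
print("learner accuracy without privileged info:", hits / 1000)
```

In this sketch the alignment term is what keeps the privileged guider imitable by the partially observing learner; without it the guider could exploit information the learner never sees, which is the failure mode the abstract alludes to.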
Yueheng Li, Guangming Xie, Zongqing Lu
Subject areas: fundamental theory of automation; computing technology, computer technology
Yueheng Li, Guangming Xie, Zongqing Lu. Guided Policy Optimization under Partial Observability [EB/OL]. (2025-05-21) [2025-06-10]. https://arxiv.org/abs/2505.15418.