Success in Humanoid Reinforcement Learning under Partial Observation
Reinforcement learning has been widely applied to robotic control, but effective policy learning under partial observability remains a major challenge, especially in high-dimensional tasks like humanoid locomotion. To date, no prior work has demonstrated stable training of humanoid policies with incomplete state information in the benchmark Gymnasium Humanoid-v4 environment. The objective in this environment is to walk forward as fast as possible without falling, with rewards provided for staying upright and moving forward, and penalties incurred for excessive actions and external contact forces. This research presents the first successful instance of learning under partial observability in this environment. The learned policy achieves performance comparable to state-of-the-art results with full state access, despite using only one-third to two-thirds of the original states. Moreover, the policy exhibits adaptability to robot properties, such as variations in body part masses. The key to this success is a novel history encoder that processes a fixed-length sequence of past observations in parallel. Integrated into a standard model-free algorithm, the encoder enables performance on par with fully observed baselines. We hypothesize that it reconstructs essential contextual information from recent observations, thereby enabling robust decision-making.
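The following is a minimal, hypothetical sketch (not the authors' released code) of the idea described above: a history encoder that takes a fixed-length window of past partial observations of Gymnasium Humanoid-v4 and maps them to a latent context in a single parallel forward pass. The observation mask, window length, and network sizes are illustrative assumptions.

```python
# Hypothetical sketch of a parallel history encoder for partially observed
# Humanoid-v4. The kept-state indices, window length, and layer sizes are
# assumptions for illustration, not values from the paper.
from collections import deque

import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn


class HistoryEncoder(nn.Module):
    """Encodes a fixed-length window of past observations in one forward pass."""

    def __init__(self, obs_dim: int, history_len: int, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim * history_len, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, history_len, obs_dim); flatten and encode in parallel
        return self.net(history.flatten(start_dim=1))


env = gym.make("Humanoid-v4")
full_dim = env.observation_space.shape[0]      # 376-dimensional full state
partial_idx = np.arange(full_dim // 3)         # assumed: expose roughly one third
obs_dim, history_len = len(partial_idx), 8

encoder = HistoryEncoder(obs_dim, history_len)
buffer = deque(maxlen=history_len)

obs, _ = env.reset(seed=0)
for _ in range(history_len):                   # pre-fill the observation window
    buffer.append(obs[partial_idx])

for _ in range(10):
    window = torch.as_tensor(np.stack(buffer), dtype=torch.float32).unsqueeze(0)
    context = encoder(window)                  # latent context fed to the policy head
    action = env.action_space.sample()         # placeholder for the learned policy
    obs, reward, terminated, truncated, _ = env.step(action)
    buffer.append(obs[partial_idx])
    if terminated or truncated:
        obs, _ = env.reset()
```

In an actual training setup, the latent context would be concatenated with (or replace) the current observation as input to the actor and critic of a standard model-free algorithm, so the encoder is learned end-to-end with the policy.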
Wuhao Wang, Zhiyong Chen
Subjects: fundamental theory of automation; computing and computer technology; automation technology and equipment
Wuhao Wang, Zhiyong Chen. Success in Humanoid Reinforcement Learning under Partial Observation [EB/OL]. (2025-07-25) [2025-08-10]. https://arxiv.org/abs/2507.18883.