
No More Blind Spots: Learning Vision-Based Omnidirectional Bipedal Locomotion for Challenging Terrain

Source: arXiv
Abstract

Effective bipedal locomotion in dynamic environments, such as cluttered indoor spaces or uneven terrain, requires agile and adaptive movement in all directions. This necessitates omnidirectional terrain sensing and a controller capable of processing such input. We present a learning framework for vision-based omnidirectional bipedal locomotion, enabling seamless movement using depth images. A key challenge is the high computational cost of rendering omnidirectional depth images in simulation, making traditional sim-to-real reinforcement learning (RL) impractical. Our method combines a robust blind controller with a teacher policy that supervises a vision-based student policy, trained on noise-augmented terrain data to avoid rendering costs during RL and ensure robustness. We also introduce a data augmentation technique for supervised student training, accelerating training by up to 10 times compared to conventional methods. Our framework is validated through simulation and real-world tests, demonstrating effective omnidirectional locomotion with minimal reliance on expensive rendering. This is, to the best of our knowledge, the first demonstration of vision-based omnidirectional bipedal locomotion, showcasing its adaptability to diverse terrains.
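To make the described pipeline concrete, below is a minimal, hypothetical PyTorch sketch of the kind of teacher-student distillation step the abstract outlines: a privileged teacher policy (with access to terrain information) supervises a vision-based student policy trained on noise-augmented depth input. All network sizes, the noise model, and the variable names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class TeacherPolicy(nn.Module):
    """Privileged teacher: sees terrain information around the robot (assumed heightmap input)."""
    def __init__(self, obs_dim=48, terrain_dim=187, act_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + terrain_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs, terrain):
        return self.net(torch.cat([obs, terrain], dim=-1))

class StudentPolicy(nn.Module):
    """Vision-based student: sees noise-augmented depth images instead of privileged terrain."""
    def __init__(self, obs_dim=48, act_dim=12):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ELU(),   # 64x64 -> 30x30
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ELU(),  # 30x30 -> 14x14
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, 64), nn.ELU(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 + obs_dim, 128), nn.ELU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs, depth):
        z = self.encoder(depth)
        return self.head(torch.cat([z, obs], dim=-1))

def augment_depth(depth, noise_std=0.02, dropout_p=0.05):
    """Stand-in noise augmentation: Gaussian pixel noise plus random pixel dropout."""
    noisy = depth + noise_std * torch.randn_like(depth)
    keep = (torch.rand_like(depth) > dropout_p).float()
    return noisy * keep

teacher, student = TeacherPolicy(), StudentPolicy()
optimizer = torch.optim.Adam(student.parameters(), lr=3e-4)

# Placeholder batch standing in for data collected by rolling out the teacher in simulation.
obs = torch.randn(64, 48)             # proprioceptive observations
terrain = torch.randn(64, 187)        # privileged terrain samples (teacher input only)
depth = torch.rand(64, 1, 64, 64)     # depth images around the robot (student input only)

with torch.no_grad():
    target_actions = teacher(obs, terrain)          # teacher provides the supervision signal
student_actions = student(obs, augment_depth(depth))
loss = nn.functional.mse_loss(student_actions, target_actions)  # behavior-cloning loss
optimizer.zero_grad()
loss.backward()
optimizer.step()

This sketch only illustrates the supervised distillation structure; the paper's key contributions (avoiding omnidirectional depth rendering during RL and the data augmentation that accelerates student training) are not reproduced here.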

Mohitvishnu S. Gadde, Pranay Dugar, Ashish Malik, Alan Fern

Computing technology, computer technology; automation technology, automation equipment

Mohitvishnu S. Gadde, Pranay Dugar, Ashish Malik, Alan Fern. No More Blind Spots: Learning Vision-Based Omnidirectional Bipedal Locomotion for Challenging Terrain [EB/OL]. (2025-08-16) [2025-09-07]. https://arxiv.org/abs/2508.11929.
