High-Performance Reinforcement Learning on Spot: Optimizing Simulation Parameters with Distributional Measures
This work presents an overview of the technical details behind a high-performance reinforcement learning policy deployment with the Spot RL Researcher Development Kit for low-level motor access on Boston Dynamics Spot. This represents the first public demonstration of an end-to-end reinforcement learning policy deployed on Spot hardware, with training code publicly available through NVIDIA Isaac Lab and deployment code available through Boston Dynamics. We utilize the Wasserstein distance and Maximum Mean Discrepancy to quantify the distributional dissimilarity of data collected on hardware and in simulation, measuring our sim2real gap. We use these measures as a scoring function for the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to optimize simulated parameters that are unknown or difficult to measure on Spot. Our procedure for modeling and training produces high-quality reinforcement learning policies capable of multiple gaits, including a flight phase. We deploy policies capable of over 5.2 m/s locomotion, more than triple the maximum speed of Spot's default controller, robustness to slippery surfaces, disturbance rejection, and overall agility previously unseen on Spot. We detail our method and release our code to support future work on Spot with the low-level API.
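To illustrate the core idea of scoring simulation parameters with a distributional measure and optimizing them via CMA-ES, the following is a minimal Python sketch, not the authors' released code. It uses a squared Maximum Mean Discrepancy with an RBF kernel as the fitness function for the pycma library; the `rollout_sim` function, the parameter vector, and the hardware data file are hypothetical placeholders.

```python
# Minimal sketch: MMD between hardware and simulated rollouts as a CMA-ES score.
# `rollout_sim`, the initial parameter vector, and the data file are assumptions.
import numpy as np
import cma  # pycma package


def rbf_kernel(a, b, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between row-wise sample sets a and b."""
    sq_dists = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-sq_dists / (2.0 * bandwidth**2))


def mmd2(x, y, bandwidth=1.0):
    """Biased estimator of squared MMD between samples x (hardware) and y (sim)."""
    return (rbf_kernel(x, x, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean()
            - 2.0 * rbf_kernel(x, y, bandwidth).mean())


def rollout_sim(params):
    """Hypothetical: run the simulator with candidate physical parameters
    (e.g. joint friction, actuator delay) and return state/action features."""
    raise NotImplementedError


# Assumed log of hardware features (same feature layout as rollout_sim output).
hardware_data = np.load("spot_hardware_features.npy")

# CMA-ES over the unknown simulation parameters, minimizing the MMD score.
es = cma.CMAEvolutionStrategy(x0=[0.1, 0.02, 0.5], sigma0=0.2)
while not es.stop():
    candidates = es.ask()
    scores = [mmd2(hardware_data, rollout_sim(np.asarray(p))) for p in candidates]
    es.tell(candidates, scores)

print("optimized sim parameters:", es.result.xbest)
```

The same loop applies with a Wasserstein distance in place of `mmd2`; only the scoring function changes.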
AJ Miller, Fangzhou Yu, Michael Brauckmann, Farbod Farshidian
Subject areas: Automation Technology, Automation Equipment; Computing Technology, Computer Technology
AJ Miller, Fangzhou Yu, Michael Brauckmann, Farbod Farshidian. High-Performance Reinforcement Learning on Spot: Optimizing Simulation Parameters with Distributional Measures [EB/OL]. (2025-04-24) [2025-05-29]. https://arxiv.org/abs/2504.17857