
LangWBC: Language-directed Humanoid Whole-Body Control via End-to-end Learning

Source: arXiv
Abstract

General-purpose humanoid robots are expected to interact intuitively with humans, enabling seamless integration into daily life. Natural language provides the most accessible medium for this purpose. However, translating language into humanoid whole-body motion remains a significant challenge, primarily due to the gap between linguistic understanding and physical actions. In this work, we present an end-to-end, language-directed policy for real-world humanoid whole-body control. Our approach combines reinforcement learning with policy distillation, allowing a single neural network to interpret language commands and execute corresponding physical actions directly. To enhance motion diversity and compositionality, we incorporate a Conditional Variational Autoencoder (CVAE) structure. The resulting policy achieves agile and versatile whole-body behaviors conditioned on language inputs, with smooth transitions between various motions, enabling adaptation to linguistic variations and the emergence of novel motions. We validate the efficacy and generalizability of our method through extensive simulations and real-world experiments, demonstrating robust whole-body control. Please see our website at LangWBC.github.io for more information.
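To make the described pipeline more concrete, below is a minimal, hypothetical PyTorch sketch of how a language-conditioned CVAE student policy distilled from an RL teacher might be wired up. All module names, dimensions, and loss weights are illustrative assumptions for this listing, not the authors' implementation; the paper and project website (LangWBC.github.io) are the authoritative references.

# Minimal, hypothetical sketch: language-conditioned CVAE policy distilled
# from a teacher. Names, dimensions, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ELU(),
        nn.Linear(hidden, hidden), nn.ELU(),
        nn.Linear(hidden, out_dim),
    )

class LanguageCVAEPolicy(nn.Module):
    def __init__(self, obs_dim=48, lang_dim=512, act_dim=23, latent_dim=32):
        super().__init__()
        # Encoder sees the language embedding, proprioception, and the
        # teacher's action; it outputs a Gaussian over the motion latent z.
        self.encoder = mlp(lang_dim + obs_dim + act_dim, 2 * latent_dim)
        # Decoder (the deployable student policy) maps (language, obs, z)
        # to joint-level actions.
        self.decoder = mlp(lang_dim + obs_dim + latent_dim, act_dim)
        self.latent_dim = latent_dim

    def forward(self, lang_emb, obs, teacher_act):
        stats = self.encoder(torch.cat([lang_emb, obs, teacher_act], dim=-1))
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        act = self.decoder(torch.cat([lang_emb, obs, z], dim=-1))
        return act, mu, logvar

    @torch.no_grad()
    def act(self, lang_emb, obs):
        # At deployment the latent is sampled from the prior N(0, I),
        # so varying z can yield diverse motions for the same command.
        z = torch.randn(obs.shape[0], self.latent_dim, device=obs.device)
        return self.decoder(torch.cat([lang_emb, obs, z], dim=-1))

def distillation_loss(student_act, teacher_act, mu, logvar, beta=1e-3):
    # Reconstruction term: imitate the RL teacher's actions.
    recon = F.mse_loss(student_act, teacher_act)
    # KL term: keep the posterior close to the standard-normal prior so
    # that sampling z at test time stays on the learned motion manifold.
    kl = -0.5 * (1.0 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return recon + beta * kl

if __name__ == "__main__":
    policy = LanguageCVAEPolicy()
    lang = torch.randn(8, 512)    # e.g. frozen text-encoder embeddings
    obs = torch.randn(8, 48)      # proprioceptive observations
    teacher = torch.randn(8, 23)  # actions from a motion-tracking RL teacher
    act, mu, logvar = policy(lang, obs, teacher)
    loss = distillation_loss(act, teacher, mu, logvar)
    loss.backward()
    print(loss.item())

The KL weight beta and the choice of a standard-normal prior are generic CVAE defaults used here only to make the sketch self-contained.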

Yiyang Shao, Xiaoyu Huang, Bike Zhang, Qiayuan Liao, Yuman Gao, Yufeng Chi, Zhongyu Li, Sophia Shao, Koushil Sreenath

Computing technology, computer technology

Yiyang Shao, Xiaoyu Huang, Bike Zhang, Qiayuan Liao, Yuman Gao, Yufeng Chi, Zhongyu Li, Sophia Shao, Koushil Sreenath. LangWBC: Language-directed Humanoid Whole-Body Control via End-to-end Learning [EB/OL]. (2025-04-30) [2025-05-24]. https://arxiv.org/abs/2504.21738.
