National Preprint Platform

LaST-R1: Reinforcing Action via Adaptive Physical Latent Reasoning for VLA Models

Siyuan Qian, Yinxi Wang, Peng Jia, Chi-Wing Fu, Zhonghao Yan, Nuowei Han, Renrui Zhang, Chenyang Gu, Jialin Gao, Ziyu Guo, Shanghang Zhang, Pheng-Ann Heng, Hao Chen, Jiaming Liu



Abstract

Vision-Language-Action (VLA) models have increasingly incorporated reasoning mechanisms for complex robotic manipulation. However, existing approaches share a critical limitation: whether employing explicit linguistic reasoning that suffers from latency and discretization, or utilizing more expressive continuous latent reasoning, they are predominantly confined to static imitation learning that limits adaptability and generalization. While online reinforcement learning (RL) has been introduced to VLAs to enable trial-and-error exploration, current methods exclusively optimize the vanilla action space, bypassing the underlying physical reasoning process. In this paper, we present LaST-R1, a unified VLA framework that integrates latent Chain-of-Thought (CoT) reasoning over physical dynamics prior to action execution, along with a tailored RL post-training paradigm. Specifically, we propose Latent-to-Action Policy Optimization (LAPO), a novel RL algorithm that jointly optimizes the latent reasoning process and the action generation. By bridging reasoning and control, LAPO improves the representation of physical world modeling and enhances robustness in interactive environments. Furthermore, an adaptive latent CoT mechanism is introduced to allow the policy to dynamically adjust its reasoning horizon based on environment complexity. Extensive experiments show that LaST-R1 achieves a near-perfect 99.8% average success rate on the LIBERO benchmark with only one-shot supervised warm-up, significantly improving convergence speed and performance over prior state-of-the-art methods. In real-world deployments, LAPO post-training yields up to a 44% improvement over the initial warm-up policy across four complex tasks, including both single-arm and dual-arm settings. Finally, LaST-R1 demonstrates strong generalization across simulated and real-world environments.
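The adaptive latent CoT mechanism described in the abstract — a policy that runs a variable number of continuous latent reasoning steps before decoding an action — can be sketched as below. This is a minimal illustrative mock, not the paper's implementation: all names (`latent_step`, `reasoning_horizon`, `act`) and the variance-based complexity heuristic are assumptions for exposition only.

```python
# Hypothetical sketch of an adaptive latent chain-of-thought policy:
# run k latent reasoning steps (k chosen from an observation-complexity
# estimate), then decode an action from the final latent state.
import numpy as np

rng = np.random.default_rng(0)

def latent_step(z, obs_feat):
    # One latent reasoning update: mix the current latent state with the
    # observation features (stand-in for a learned transition network).
    return np.tanh(0.5 * z + 0.5 * obs_feat)

def reasoning_horizon(obs_feat, max_steps=4):
    # Adaptive horizon: a simple heuristic where higher-variance
    # (assumed "harder") observations get more latent steps.
    complexity = float(np.var(obs_feat))
    return 1 + min(max_steps - 1, int(complexity * 10))

def act(obs_feat):
    z = np.zeros_like(obs_feat)
    for _ in range(reasoning_horizon(obs_feat)):
        z = latent_step(z, obs_feat)
    # Decode an action from the final latent state (stand-in action head).
    return float(np.clip(z.mean(), -1.0, 1.0))

obs = rng.normal(size=8)
action = act(obs)
print(reasoning_horizon(obs), action)
```

In an RL post-training scheme like the LAPO described above, gradients from the task reward would flow through both the action head and the latent transition, so the reasoning process itself is optimized rather than only the action output.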

Cite this article

Siyuan Qian, Yinxi Wang, Peng Jia, Chi-Wing Fu, Zhonghao Yan, Nuowei Han, Renrui Zhang, Chenyang Gu, Jialin Gao, Ziyu Guo, Shanghang Zhang, Pheng-Ann Heng, Hao Chen, Jiaming Liu. LaST-R1: Reinforcing Action via Adaptive Physical Latent Reasoning for VLA Models [EB/OL]. (2026-04-30) [2026-05-02]. https://arxiv.org/abs/2604.28192.

Subject classification

Computing and computer technology


First published: 2026-04-30