ACTLLM: Action Consistency Tuned Large Language Model

Source: arXiv

Abstract

This paper introduces ACTLLM (Action Consistency Tuned Large Language Model), a novel approach for robot manipulation in dynamic environments. Traditional vision-based systems often struggle to learn visual representations that excel at both task execution and spatial reasoning, which limits their adaptability in dynamic environments. ACTLLM addresses these challenges by harnessing language to craft structured scene descriptors, providing a uniform interface for both spatial understanding and task performance through flexible language instructions. Moreover, we introduce a novel action consistency constraint that aligns visual perception with the corresponding actions, thereby enhancing the learning of actionable visual representations. Additionally, we reformulate the Markov decision process for manipulation tasks as a multi-turn visual dialogue, which enables the modeling of long-term task execution with enhanced contextual relevance derived from the history of task execution. In our evaluation, ACTLLM excels across diverse scenarios, demonstrating its effectiveness on challenging vision-based robot manipulation tasks.
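
Since the abstract gives no implementation details, the following is a minimal, hypothetical sketch of how an action consistency constraint of this kind might be cast as an alignment loss between projected visual features and action embeddings. The module name ActionConsistencyLoss, the projection heads, and the cosine form of the loss are illustrative assumptions, not the paper's actual formulation.

    # Hypothetical sketch: encourage visual features to stay consistent with
    # the action taken in the same scene. Names and loss form are assumptions,
    # not taken from the ACTLLM paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ActionConsistencyLoss(nn.Module):
        """Penalizes mismatch between visual features and action embeddings."""

        def __init__(self, visual_dim: int, action_dim: int, embed_dim: int = 256):
            super().__init__()
            # Hypothetical projection heads into a shared embedding space.
            self.visual_proj = nn.Linear(visual_dim, embed_dim)
            self.action_proj = nn.Linear(action_dim, embed_dim)

        def forward(self, visual_feats: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
            v = F.normalize(self.visual_proj(visual_feats), dim=-1)
            a = F.normalize(self.action_proj(actions), dim=-1)
            # Aligned visual-action pairs should have cosine similarity near 1.
            return (1.0 - (v * a).sum(dim=-1)).mean()

    # Usage sketch: a batch of 8 visual features (dim 512) and 7-DoF actions.
    loss_fn = ActionConsistencyLoss(visual_dim=512, action_dim=7)
    loss = loss_fn(torch.randn(8, 512), torch.randn(8, 7))

In practice such a consistency term would presumably be added, with a weighting coefficient, to the model's main training objective rather than used on its own.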

Jing Bi, Lianggong Bruce Wen, Zhang Liu, Chenliang Xu

Subjects: Computing Technology; Computer Technology

Jing Bi, Lianggong Bruce Wen, Zhang Liu, Chenliang Xu. ACTLLM: Action Consistency Tuned Large Language Model [EB/OL]. (2025-06-26) [2025-07-17]. https://arxiv.org/abs/2506.21250.
