
Offline Policy Learning via Skill-step Abstraction for Long-horizon Goal-Conditioned Tasks

Source: arXiv
English Abstract

Goal-conditioned (GC) policy learning often faces the challenge of sparse rewards when confronting long-horizon goals. To address this challenge, we explore skill-based GC policy learning in offline settings, where skills are acquired from existing data and long-horizon goals are decomposed into sequences of near-term goals that align with these skills. Specifically, we present an "offline GC policy learning via skill-step abstraction" framework (GLvSA) tailored for tackling long-horizon GC tasks affected by goal distribution shifts. In the framework, a GC policy is progressively learned offline in conjunction with the incremental modeling of skill-step abstractions on the data. We also devise a GC policy hierarchy that not only accelerates GC policy learning within the framework but also allows for parameter-efficient fine-tuning of the policy. Through experiments with the maze and Franka kitchen environments, we demonstrate the superiority and efficiency of our GLvSA framework in adapting GC policies to a wide range of long-horizon goals. The framework achieves competitive zero-shot and few-shot adaptation performance, outperforming existing GC policy learning and skill-based methods.
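The core idea of decomposing a long-horizon goal into a sequence of near-term subgoals, each reachable within one skill's horizon, can be illustrated with a toy sketch. This is a minimal, hypothetical stand-in for the learned skill-step abstraction described in the abstract (the function name, the straight-line waypoint heuristic, and the 2D maze setting are illustrative assumptions, not the paper's actual method or API):

```python
import math

def decompose_goal(state, goal, skill_horizon, step_size=1.0):
    """Toy skill-step decomposition: split the straight-line path from
    `state` to `goal` into subgoals, each at most `skill_horizon * step_size`
    apart, so every subgoal lies within one skill's reach.

    In GLvSA the subgoals would instead come from a learned skill-step
    abstraction of the offline data; this geometric version only conveys
    the decomposition structure.
    """
    dx, dy = goal[0] - state[0], goal[1] - state[1]
    dist = math.hypot(dx, dy)
    max_leg = skill_horizon * step_size
    n_steps = max(1, math.ceil(dist / max_leg))
    # Place subgoals at evenly spaced fractions of the path; the final
    # subgoal coincides with the long-horizon goal itself.
    return [(state[0] + (i / n_steps) * dx, state[1] + (i / n_steps) * dy)
            for i in range(1, n_steps + 1)]

# Example: a goal 10 units away, with skills that cover ~4 units each,
# yields a chain of 3 near-term subgoals.
subgoals = decompose_goal((0.0, 0.0), (10.0, 0.0), skill_horizon=4)
print(subgoals)
```

A skill-conditioned low-level policy would then be rolled out toward each subgoal in turn; only the decomposition step is shown here.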

Minjong Yoo, Donghoon Kim, Honguk Woo

Computing Technology, Computer Technology

Minjong Yoo, Donghoon Kim, Honguk Woo. Offline Policy Learning via Skill-step Abstraction for Long-horizon Goal-Conditioned Tasks [EB/OL]. (2024-08-20) [2025-08-02]. https://arxiv.org/abs/2408.11300.
