State-Covering Trajectory Stitching for Diffusion Planners
Diffusion-based generative models are emerging as powerful tools for long-horizon planning in reinforcement learning (RL), particularly with offline datasets. However, their performance is fundamentally limited by the quality and diversity of the training data, which often restricts their generalization to tasks outside the training distribution or to longer planning horizons. To overcome this challenge, we propose State-Covering Trajectory Stitching (SCoTS), a novel reward-free trajectory augmentation method that incrementally stitches together short trajectory segments, systematically generating diverse and extended trajectories. SCoTS first learns a temporal distance-preserving latent representation that captures the underlying temporal structure of the environment, then iteratively stitches trajectory segments, guided by directional exploration and novelty, to effectively cover and expand this latent space. We demonstrate that SCoTS significantly improves the performance and generalization capabilities of diffusion planners on offline goal-conditioned benchmarks requiring stitching and long-horizon reasoning. Furthermore, the augmented trajectories generated by SCoTS also significantly improve the performance of widely used offline goal-conditioned RL algorithms across diverse environments.
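The abstract describes SCoTS only at a high level, so the sketch below is a hypothetical, simplified illustration of the stitching idea rather than the authors' implementation. It assumes an already-learned temporal distance-preserving encoder (mocked here by a fixed random projection, `encode`) and shows how short segments could be chained greedily when their latent endpoints are close, preferring continuations whose endpoints are novel relative to the latent states covered so far. The function names, the joining threshold `join_eps`, and the novelty score are all illustrative assumptions, not the paper's interface.

```python
import numpy as np

rng = np.random.default_rng(0)

state_dim, latent_dim = 4, 2
# Fixed random projection standing in for the learned temporal-distance-preserving encoder.
rng_proj = rng.normal(size=(state_dim, latent_dim)) / np.sqrt(state_dim)


def encode(states: np.ndarray) -> np.ndarray:
    """Stand-in for the learned encoder; the paper learns this from the offline data."""
    return states @ rng_proj


def novelty(z: np.ndarray, visited: np.ndarray) -> float:
    """Novelty of a latent point = distance to the nearest already-visited latent."""
    if len(visited) == 0:
        return float("inf")
    return float(np.min(np.linalg.norm(visited - z, axis=1)))


def stitch_trajectories(segments, n_rounds=50, join_eps=2.0):
    """Greedily chain segments whose latent endpoints are close, preferring
    continuations whose endpoints look novel (illustrative, not the paper's algorithm)."""
    visited = np.concatenate([encode(s) for s in segments])
    start_latents = np.stack([encode(s)[0] for s in segments])
    stitched = []
    current = [int(rng.integers(len(segments)))]  # indices of the chained segments
    for _ in range(n_rounds):
        tail_z = encode(segments[current[-1]])[-1]  # latent of the current endpoint
        # Candidate segments whose start lies near the current endpoint in latent space.
        close = np.where(np.linalg.norm(start_latents - tail_z, axis=1) < join_eps)[0]
        if len(close) == 0:
            stitched.append(current)
            current = [int(rng.integers(len(segments)))]  # restart from a new seed segment
            continue
        # Pick the candidate whose own endpoint is most novel w.r.t. covered latents.
        scores = [novelty(encode(segments[i])[-1], visited) for i in close]
        best = int(close[int(np.argmax(scores))])
        current.append(best)
        visited = np.concatenate([visited, encode(segments[best])])
    stitched.append(current)
    return stitched


if __name__ == "__main__":
    # Toy data: 20 random-walk segments of 10 steps in a 4-D state space.
    segments = [rng.normal(size=(10, state_dim)).cumsum(axis=0) for _ in range(20)]
    for chain in stitch_trajectories(segments):
        print("stitched segment indices:", chain)
```

In this toy form, novelty is simply the distance to the nearest already-covered latent point; the paper's directional exploration and its actual segment-compatibility criteria would replace these placeholders.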
Kyowoon Lee, Jaesik Choi
Computing technology, computer technology
Kyowoon Lee, Jaesik Choi. State-Covering Trajectory Stitching for Diffusion Planners [EB/OL]. (2025-06-01) [2025-06-22]. https://arxiv.org/abs/2506.00895.