Symbolically-Guided Visual Plan Inference from Uncurated Video Data

Source: arXiv

Abstract

Visual planning, which supplies a sequence of intermediate visual subgoals to a goal-conditioned low-level policy, achieves promising performance on long-horizon manipulation tasks. To obtain the subgoals, existing methods typically resort to video generation models, which suffer from hallucination and high computational cost. We present Vis2Plan, an efficient, explainable, and white-box visual planning framework powered by symbolic guidance. From raw, unlabeled play data, Vis2Plan harnesses vision foundation models to automatically extract a compact set of task symbols, from which it builds a high-level symbolic transition graph for multi-goal, multi-stage planning. At test time, given a desired task goal, the planner plans at the symbolic level and assembles a sequence of physically consistent intermediate subgoal images grounded in the underlying symbolic representation. Vis2Plan outperforms strong visual planners based on diffusion video generation, delivering a 53% higher aggregate success rate in real-robot settings while generating visual plans 35× faster. These results indicate that Vis2Plan generates physically consistent image goals while offering fully inspectable reasoning steps.
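As an illustration of the planning mechanism described in the abstract, here is a minimal sketch (not the authors' released code; the TransitionGraph class, the symbol tuples, and the frame-lookup scheme are assumptions made for exposition). It shows how a symbolic transition graph extracted from play videos could be searched with breadth-first search, with each symbolic subgoal grounded back to a real stored frame:

from collections import deque

class TransitionGraph:
    """Hypothetical symbolic transition graph mined from play videos."""

    def __init__(self):
        self.edges = {}         # symbolic state -> set of successor states
        self.state_images = {}  # symbolic state -> frames observed in that state

    def add_transition(self, src, dst, frame):
        # Record an observed transition and the frame that grounds `dst`.
        self.edges.setdefault(src, set()).add(dst)
        self.state_images.setdefault(dst, []).append(frame)

    def plan(self, start, goal):
        # Breadth-first search over symbolic states; returns the shortest
        # symbolic path from `start` to `goal`, or None if unreachable.
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], ()):
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(path + [nxt])
        return None

    def visual_plan(self, start, goal):
        # Ground each symbolic subgoal (after the start state) in a real
        # observed frame, yielding an image-goal sequence for the policy.
        path = self.plan(start, goal)
        if path is None:
            return None
        return [self.state_images[s][0] for s in path[1:]]

g = TransitionGraph()
g.add_transition(("drawer_closed",), ("drawer_open",), "frame_012.png")
g.add_transition(("drawer_open",), ("block_in_drawer",), "frame_087.png")
print(g.visual_plan(("drawer_closed",), ("block_in_drawer",)))
# -> ['frame_012.png', 'frame_087.png']

Because the subgoal images in such a scheme are retrieved real frames rather than generated ones, the plan is physically consistent by construction, and every symbolic step along the path is inspectable, which matches the white-box property the abstract emphasizes.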

Ahmet Tikna, Yi Zhao, Yuying Zhang, Luigi Palopoli, Marco Roveri, Joni Pajarinen, Wenyan Yang

Computing Technology, Computer Technology

Ahmet Tikna, Yi Zhao, Yuying Zhang, Luigi Palopoli, Marco Roveri, Joni Pajarinen, Wenyan Yang. Symbolically-Guided Visual Plan Inference from Uncurated Video Data [EB/OL]. (2025-05-13) [2025-07-02]. https://arxiv.org/abs/2505.08444