
Learning What to Do and What Not To Do: Offline Imitation from Expert and Undesirable Demonstrations

Source: arXiv
Abstract

Offline imitation learning typically learns from expert and unlabeled demonstrations, yet often overlooks the valuable signal in explicitly undesirable behaviors. In this work, we study offline imitation learning from contrasting behaviors, where the dataset contains both expert and undesirable demonstrations. We propose a novel formulation that optimizes a difference of KL divergences over the state-action visitation distributions of expert and undesirable (or bad) data. Although the resulting objective is a DC (Difference-of-Convex) program, we prove that it becomes convex when expert demonstrations outweigh undesirable demonstrations, enabling a practical and stable non-adversarial training objective. Our method avoids adversarial training and handles both positive and negative demonstrations in a unified framework. Extensive experiments on standard offline imitation learning benchmarks demonstrate that our approach consistently outperforms state-of-the-art baselines.
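The abstract does not state the objective explicitly, but a schematic form of a difference-of-KL objective over state-action visitation distributions can be sketched as follows. Here $d^{\pi}$, $d^{E}$, and $d^{B}$ denote the visitation distributions of the learned policy, the expert data, and the undesirable data, and the weight $\alpha$ is hypothetical notation; the paper's exact formulation and its convexity condition may differ.

```latex
\min_{\pi}\;
\underbrace{D_{\mathrm{KL}}\!\left(d^{\pi}\,\middle\|\,d^{E}\right)}_{\text{imitate expert behavior}}
\;-\;
\alpha\,\underbrace{D_{\mathrm{KL}}\!\left(d^{\pi}\,\middle\|\,d^{B}\right)}_{\text{move away from undesirable behavior}},
\qquad \alpha > 0 .
```

Each KL term is convex in $d^{\pi}$, so their difference is a DC (difference-of-convex) program in general; the paper's claim is that when expert demonstrations outweigh undesirable ones, the combined objective becomes convex, removing the need for adversarial training.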

Huy Hoang, Tien Mai, Pradeep Varakantham, Tanvi Verma

Subject: Computing Technology; Computer Science

Huy Hoang, Tien Mai, Pradeep Varakantham, Tanvi Verma. Learning What to Do and What Not To Do: Offline Imitation from Expert and Undesirable Demonstrations [EB/OL]. (2025-05-27) [2025-06-15]. https://arxiv.org/abs/2505.21182.
