Disentangled Object-Centric Image Representation for Robotic Manipulation
Learning robotic manipulation skills from vision is a promising approach for developing robotics applications that generalize broadly to real-world scenarios. Many approaches toward this goal have been explored with fruitful results. In particular, object-centric representation methods have been shown to provide better inductive biases for skill learning, leading to improved performance and generalization. Nonetheless, we show that object-centric methods can struggle to learn even simple manipulation skills in multi-object environments. We therefore propose DOCIR, an object-centric framework that introduces a disentangled representation for objects of interest, obstacles, and the robot embodiment. We show that this approach achieves state-of-the-art performance for learning pick-and-place skills from visual inputs in multi-object environments, and that it generalizes at test time to changes in the objects of interest and distractors in the scene. Furthermore, we demonstrate its efficacy both in simulation and in zero-shot transfer to the real world.
Bingbing Wu, Seungsu Kim, Romain Brégier, Jean-Luc Meunier, Denys Proux, Jean-Michel Renders, David Emukpere, Romain Deffayet, Michael Niemaz
Subject: automation technology; automation equipment
Bingbing Wu, Seungsu Kim, Romain Brégier, Jean-Luc Meunier, Denys Proux, Jean-Michel Renders, David Emukpere, Romain Deffayet, Michael Niemaz. Disentangled Object-Centric Image Representation for Robotic Manipulation [EB/OL]. (2025-03-14) [2025-08-02]. https://arxiv.org/abs/2503.11565.