SegDAC: Segmentation-Driven Actor-Critic for Visual Reinforcement Learning
Visual reinforcement learning (RL) is challenging because an agent must learn both perception and control from high-dimensional inputs and noisy rewards. Although large pre-trained perception models exist, how to integrate them into RL to improve visual generalization and sample efficiency remains unclear. We propose SegDAC, a Segmentation-Driven Actor-Critic method. SegDAC uses Segment Anything (SAM) for object-centric decomposition and YOLO-World to ground segments semantically via text prompts. It includes a novel transformer-based architecture that supports a dynamic number of segments at each time step and learns which segments to focus on using online RL, without human labels. Evaluating SegDAC on a challenging visual generalization benchmark built on ManiSkill3, which covers diverse manipulation tasks under strong visual perturbations, we show that SegDAC achieves significantly better visual generalization, doubling prior performance on the hardest setting, and matches or exceeds prior methods in sample efficiency across all evaluated tasks.
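To illustrate the key architectural idea, handling a number of segments that changes at every time step, here is a minimal NumPy sketch of single-query attention pooling over segment embeddings. All names, shapes, and the pooling scheme are assumptions for illustration, not the actual SegDAC implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(segment_embs, query, Wk, Wv):
    """Pool a variable number of segment embeddings into one fixed-size vector.

    segment_embs: (N, d) -- N may differ per time step as the segmenter
    proposes more or fewer segments; attention handles any N uniformly.
    """
    K = segment_embs @ Wk                         # keys,   (N, d)
    V = segment_embs @ Wv                         # values, (N, d)
    scores = K @ query / np.sqrt(query.shape[0])  # (N,)
    weights = softmax(scores)                     # which segments get focus
    return weights @ V                            # (d,) regardless of N

rng = np.random.default_rng(0)
d = 8
query = rng.standard_normal(d)            # learned query (hypothetical)
Wk = rng.standard_normal((d, d))
Wv = rng.standard_normal((d, d))

# Two time steps with different segment counts (3 vs. 5 segments)
# still yield same-sized state vectors for the actor and critic heads.
state_t0 = attention_pool(rng.standard_normal((3, d)), query, Wk, Wv)
state_t1 = attention_pool(rng.standard_normal((5, d)), query, Wk, Wv)
assert state_t0.shape == state_t1.shape == (d,)
```

Because the attention weights are learned end-to-end, gradients from the RL objective can shift focus toward task-relevant segments without any human labels.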
Alexandre Brown, Glen Berseth
Computing Technology, Computer Technology
Alexandre Brown, Glen Berseth. SegDAC: Segmentation-Driven Actor-Critic for Visual Reinforcement Learning [EB/OL]. (2025-08-12) [2025-08-24]. https://arxiv.org/abs/2508.09325.