Integrating Vision Foundation Models with Reinforcement Learning for Enhanced Object Interaction
This paper presents a novel approach that integrates vision foundation models with reinforcement learning to enhance object interaction capabilities in simulated environments. By combining the Segment Anything Model (SAM) and YOLOv5 with a Proximal Policy Optimization (PPO) agent operating in the AI2-THOR simulation environment, we enable the agent to perceive and interact with objects more effectively. Our comprehensive experiments, conducted across four diverse indoor kitchen settings, demonstrate significant improvements in object interaction success rates and navigation efficiency compared to a baseline agent without advanced perception. The results show a 68% increase in average cumulative reward, a 52.5% improvement in object interaction success rate, and a 33% increase in navigation efficiency. These findings highlight the potential of integrating foundation models with reinforcement learning for complex robotic tasks, paving the way for more sophisticated and capable autonomous agents.
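The sketch below illustrates the kind of perception-to-policy pipeline the abstract describes: YOLOv5 proposes object boxes, SAM refines each box into a mask, and the resulting features form the observation for a PPO agent acting in an AI2-THOR kitchen scene. This is a minimal illustration, not the authors' implementation; the environment wrapper name, observation layout, action set, and reward shaping are assumptions, and it relies on the public ai2thor, ultralytics/yolov5 (via torch.hub), segment-anything, and stable-baselines3 packages.

```python
# Minimal sketch of a SAM + YOLOv5 perception front-end for a PPO agent in AI2-THOR.
# All names, the reward shaping, and the observation layout are illustrative assumptions.
import numpy as np
import torch
import gymnasium as gym
from gymnasium import spaces
from ai2thor.controller import Controller
from segment_anything import sam_model_registry, SamPredictor
from stable_baselines3 import PPO


class KitchenInteractionEnv(gym.Env):
    """AI2-THOR kitchen scene whose observations are YOLOv5 + SAM perception features."""

    ACTIONS = ["MoveAhead", "RotateLeft", "RotateRight", "PickupObject"]
    TOP_K = 5  # keep the K most confident detections per frame

    def __init__(self, scene="FloorPlan1", sam_checkpoint="sam_vit_b.pth"):
        super().__init__()
        self.controller = Controller(scene=scene)
        self.detector = torch.hub.load("ultralytics/yolov5", "yolov5s")   # object detector
        sam = sam_model_registry["vit_b"](checkpoint=sam_checkpoint)      # segmentation model
        self.predictor = SamPredictor(sam)
        # Per detection: [x1, y1, x2, y2, confidence, class_id, mask_area_fraction]
        self.observation_space = spaces.Box(low=0.0, high=1.0,
                                            shape=(self.TOP_K * 7,), dtype=np.float32)
        self.action_space = spaces.Discrete(len(self.ACTIONS))

    def _perceive(self, frame):
        """Run YOLOv5 on the frame, refine each box with a SAM mask, build a flat vector."""
        h, w, _ = frame.shape
        dets = self.detector(frame).xyxy[0].cpu().numpy()[: self.TOP_K]
        self.predictor.set_image(frame)
        feats = np.zeros((self.TOP_K, 7), dtype=np.float32)
        for i, (x1, y1, x2, y2, conf, cls) in enumerate(dets):
            masks, _, _ = self.predictor.predict(box=np.array([x1, y1, x2, y2]),
                                                 multimask_output=False)
            feats[i] = [x1 / w, y1 / h, x2 / w, y2 / h, conf,
                        cls / 80.0, masks[0].mean()]  # 80 = number of COCO classes
        return feats.flatten()

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        event = self.controller.reset()
        return self._perceive(event.frame), {}

    def step(self, action):
        name = self.ACTIONS[action]
        if name == "PickupObject":
            # Assumed heuristic: try to pick up the first visible pickupable object.
            visible = [o for o in self.controller.last_event.metadata["objects"]
                       if o["visible"] and o["pickupable"]]
            if visible:
                event = self.controller.step(action=name, objectId=visible[0]["objectId"])
            else:
                event = self.controller.step(action="Pass")  # no-op when nothing is reachable
        else:
            event = self.controller.step(action=name)
        success = event.metadata["lastActionSuccess"]
        # Assumed reward shaping: bonus for a successful pickup, small per-step cost otherwise.
        reward = 1.0 if (name == "PickupObject" and success) else -0.01
        terminated = name == "PickupObject" and success
        return self._perceive(event.frame), reward, terminated, False, {}


if __name__ == "__main__":
    env = KitchenInteractionEnv()
    agent = PPO("MlpPolicy", env, verbose=1)
    agent.learn(total_timesteps=10_000)
```

Feeding the policy a compact per-object feature vector (rather than raw pixels) is one plausible way to realize the reported gains, since the foundation models carry the perceptual burden and PPO only has to learn which detected object to act on.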
Ahmad Farooq, Kamran Iqbal
Subject: Computing Technology, Computer Technology
Ahmad Farooq, Kamran Iqbal. Integrating Vision Foundation Models with Reinforcement Learning for Enhanced Object Interaction [EB/OL]. (2025-08-07) [2025-08-24]. https://arxiv.org/abs/2508.05838