
DINO-R1: Incentivizing Reasoning Capability in Vision Foundation Models

Source: arXiv

Abstract

Recent work on the reasoning capabilities of large language models, such as DeepSeek-R1, has demonstrated remarkable success through reinforcement-learning-based fine-tuning frameworks, exemplified by methods like Group Relative Policy Optimization (GRPO). However, such reasoning abilities remain underexplored and notably absent in vision foundation models, including representation models like the DINO series. In this work, we propose DINO-R1, the first attempt to incentivize visual in-context reasoning capabilities of vision foundation models using reinforcement learning. Specifically, DINO-R1 introduces Group Relative Query Optimization (GRQO), a novel reinforcement-style training strategy explicitly designed for query-based representation models, which computes query-level rewards based on group-normalized alignment quality. We also apply KL regularization to stabilize the objectness distribution, reducing training instability. This joint optimization enables dense and expressive supervision across queries while mitigating overfitting and distributional drift. Building upon Grounding-DINO, we train a series of DINO-R1 family models that integrate a visual prompt encoder and a visual-guided query selection mechanism. Extensive experiments on COCO, LVIS, and ODinW demonstrate that DINO-R1 significantly outperforms supervised fine-tuning baselines, achieving strong generalization in both open-vocabulary and closed-set visual prompting scenarios.
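The core idea of GRQO, a group-normalized query-level reward combined with KL regularization toward a reference objectness distribution, can be illustrated compactly. Below is a minimal PyTorch sketch under stated assumptions: the function name `grqo_loss`, the tensor shapes, and the `kl_weight` hyperparameter are illustrative choices, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def grqo_loss(query_scores, alignment_quality, ref_objectness,
              kl_weight=1.0, eps=1e-6):
    """Hypothetical sketch of a GRQO-style objective.

    query_scores:      (B, Q) objectness logits for Q queries per image.
    alignment_quality: (B, Q) per-query alignment quality with the target
                       (e.g., an IoU- or matching-based score).
    ref_objectness:    (B, Q) objectness logits from a frozen reference model.
    """
    # Group-normalize alignment quality across the query group, analogous
    # to the GRPO advantage: reward queries that align better than the
    # group average, penalize those below it.
    mean = alignment_quality.mean(dim=1, keepdim=True)
    std = alignment_quality.std(dim=1, keepdim=True)
    advantage = (alignment_quality - mean) / (std + eps)

    # Reinforcement-style term: raise the log-objectness of queries with
    # positive advantage, lower it for queries below the group mean.
    log_p = F.log_softmax(query_scores, dim=1)
    policy_loss = -(advantage.detach() * log_p).sum(dim=1).mean()

    # KL regularization anchoring the current objectness distribution to
    # the frozen reference, to curb distributional drift during training.
    log_q = F.log_softmax(ref_objectness, dim=1)
    kl = F.kl_div(log_q, log_p.exp(), reduction="batchmean")

    return policy_loss + kl_weight * kl
```

Here `advantage.detach()` mirrors the GRPO convention of treating the group-normalized reward as a fixed weight on the log-probability term, while the KL term supplies the stabilizing pull toward the reference distribution described in the abstract.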

Chenbin Pan, Wenbin He, Zhengzhong Tu, Liu Ren

Subject areas: computing technology; computer technology

Chenbin Pan, Wenbin He, Zhengzhong Tu, Liu Ren. DINO-R1: Incentivizing Reasoning Capability in Vision Foundation Models [EB/OL]. (2025-05-29) [2025-06-28]. https://arxiv.org/abs/2505.24025.