ReferDINO: Referring Video Object Segmentation with Visual Grounding Foundations
Referring video object segmentation (RVOS) aims to segment target objects throughout a video based on a text description. The task is challenging because it demands deep vision-language understanding, pixel-level dense prediction, and spatiotemporal reasoning. Despite notable progress in recent years, existing methods still fall short when all of these aspects are considered together. In this work, we propose ReferDINO, a strong RVOS model that inherits region-level vision-language alignment from foundational visual grounding models and is further endowed with pixel-level dense perception and cross-modal spatiotemporal reasoning. Specifically, ReferDINO integrates two key components: 1) a grounding-guided deformable mask decoder that uses location predictions to progressively guide mask prediction through differentiable deformation mechanisms; 2) an object-consistent temporal enhancer that injects pretrained time-varying text features into inter-frame interaction to capture object-aware dynamic changes. Moreover, a confidence-aware query pruning strategy is designed to accelerate object decoding without compromising model performance. Extensive experiments on five benchmarks demonstrate that ReferDINO significantly outperforms previous methods (e.g., +3.9% J&F on Ref-YouTube-VOS) while running at real-time inference speed (51 FPS).
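The confidence-aware query pruning mentioned in the abstract can be pictured as retaining only the highest-scoring object queries before the later decoding stages. The sketch below is a minimal PyTorch illustration of that general idea under assumed tensor shapes; the function name prune_queries and the keep_ratio parameter are hypothetical and are not taken from the paper.

import torch

def prune_queries(queries, confidences, keep_ratio=0.25):
    # queries:     (num_queries, dim) decoder object-query embeddings (assumed shape)
    # confidences: (num_queries,)     per-query confidence scores
    # keep_ratio:  fraction of queries to retain (hypothetical parameter)
    num_keep = max(1, int(queries.size(0) * keep_ratio))
    # Keep only the most confident queries so that downstream decoding
    # layers process fewer candidates while likely targets are preserved.
    _, top_idx = confidences.topk(num_keep)
    return queries[top_idx], top_idx

# Toy usage: 300 queries of dimension 256, pruned to the top 25%.
q = torch.randn(300, 256)
c = torch.rand(300)
kept, idx = prune_queries(q, c)
print(kept.shape)  # torch.Size([75, 256])

In this form the saving is purely computational: subsequent cross-modal interaction and mask decoding operate only on the retained subset, which is the general mechanism by which such pruning trades query count for speed.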
Tianming Liang, Jianguo Zhang, Kun-Yu Lin, Chaolei Tan, Wei-Shi Zheng, Jian-Fang Hu
Computing Technology, Computer Technology
Tianming Liang, Jianguo Zhang, Kun-Yu Lin, Chaolei Tan, Wei-Shi Zheng, Jian-Fang Hu. ReferDINO: Referring Video Object Segmentation with Visual Grounding Foundations [EB/OL]. (2025-06-30) [2025-07-16]. https://arxiv.org/abs/2501.14607.