
A Temporal Modeling Framework for Video Pre-Training on Video Instance Segmentation


Source: arXiv
Abstract

Contemporary Video Instance Segmentation (VIS) methods typically adhere to a pre-train then fine-tune regime, where a segmentation model trained on images is fine-tuned on videos. However, the lack of temporal knowledge in the pre-trained model introduces a domain gap which may adversely affect the VIS performance. To effectively bridge this gap, we present a novel video pre-training approach to enhance VIS models, especially for videos with intricate instance relationships. Our crucial innovation focuses on reducing disparities between the pre-training and fine-tuning stages. Specifically, we first introduce consistent pseudo-video augmentations to create diverse pseudo-video samples for pre-training while maintaining the instance consistency across frames. Then, we incorporate a multi-scale temporal module to enhance the model's ability to model temporal relations through self- and cross-attention at short- and long-term temporal spans. Our approach does not set constraints on model architecture and can integrate seamlessly with various VIS methods. Experiment results on commonly adopted VIS benchmarks show that our method consistently outperforms state-of-the-art methods. Our approach achieves a notable 4.0% increase in average precision on the challenging OVIS dataset.
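The "consistent pseudo-video augmentation" idea described above can be illustrated with a minimal sketch: apply the *same* random transform per frame to both an image and its instance mask, so each generated frame remains pixel-aligned with its labels and instance identities stay consistent across the pseudo-video. This is a hypothetical illustration using simple integer shifts, not the paper's actual augmentation pipeline.

```python
import numpy as np

def make_pseudo_video(image, mask, num_frames=4, max_shift=2, seed=0):
    """Build a pseudo-video from a single image and its instance mask.

    Each frame applies one random shift to BOTH image and mask, so the
    instance IDs in the mask remain valid for every generated frame.
    (Illustrative sketch only; the paper's augmentations are richer.)
    """
    rng = np.random.default_rng(seed)
    frames, masks = [], []
    for _ in range(num_frames):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        # np.roll keeps all pixels, so instance consistency is trivial here;
        # crops/flips would need the same care to transform labels jointly.
        frames.append(np.roll(image, (dy, dx), axis=(0, 1)))
        masks.append(np.roll(mask, (dy, dx), axis=(0, 1)))
    return np.stack(frames), np.stack(masks)
```

Because the transform is shared between image and mask within each frame, every instance present in the source mask appears, with the same ID, in every frame of the pseudo-video.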

Qing Zhong, Peng-Tao Jiang, Wen Wang, Guodong Ding, Lin Wu, Kaiqi Huang

Subject: computing technology; computer technology

Qing Zhong, Peng-Tao Jiang, Wen Wang, Guodong Ding, Lin Wu, Kaiqi Huang. A Temporal Modeling Framework for Video Pre-Training on Video Instance Segmentation [EB/OL]. (2025-03-22) [2025-08-02]. https://arxiv.org/abs/2503.17672.
