
Visual and Memory Dual Adapter for Multi-Modal Object Tracking

Source: arXiv

Abstract

Prompt-learning-based multi-modal trackers have achieved promising progress by employing lightweight visual adapters to incorporate auxiliary modality features into frozen foundation models. However, existing approaches often struggle to learn reliable prompts due to limited exploitation of critical cues across the frequency and temporal domains. In this paper, we propose a novel visual and memory dual adapter (VMDA) to construct more robust and discriminative representations for multi-modal tracking. Specifically, we develop a simple but effective visual adapter that adaptively transfers discriminative cues from the auxiliary modality to the dominant modality by jointly modeling frequency, spatial, and channel-wise features. Additionally, we design a memory adapter inspired by the human memory mechanism, which stores global temporal cues and performs dynamic update and retrieval operations to ensure the consistent propagation of reliable temporal information across video sequences. Extensive experiments demonstrate that our method achieves state-of-the-art performance on various multi-modal tracking tasks, including RGB-Thermal, RGB-Depth, and RGB-Event tracking. Code and models are available at https://github.com/xuboyue1999/mmtrack.git.
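The memory adapter's "dynamic update and retrieval" described above can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the authors' code: the class name, eviction policy (oldest-first), and similarity-weighted retrieval are assumptions used only to make the stated idea concrete.

```python
import numpy as np

class MemoryAdapter:
    """Toy memory bank illustrating the store / dynamic update / retrieval
    idea from the abstract (hypothetical sketch, not the VMDA implementation)."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.bank = []  # stored global temporal cues (feature vectors)

    def update(self, feat):
        # Dynamic update: append the new temporal cue; evict the oldest when full.
        self.bank.append(np.asarray(feat, dtype=float))
        if len(self.bank) > self.capacity:
            self.bank.pop(0)

    def retrieve(self, query):
        # Retrieval: cosine-similarity-weighted aggregation of stored cues,
        # so reliable temporal information propagates to the current frame.
        q = np.asarray(query, dtype=float)
        if not self.bank:
            return q
        mem = np.stack(self.bank)                                  # (N, dim)
        sims = mem @ q / (np.linalg.norm(mem, axis=1)
                          * np.linalg.norm(q) + 1e-8)              # (N,)
        weights = np.exp(sims) / np.exp(sims).sum()                # softmax
        return weights @ mem                                       # (dim,)
```

In a tracker, `update` would be called once per frame with the current target feature and `retrieve` would condition the next frame's prediction on the aggregated memory.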

Boyue Xu, Ruichao Hou, Tongwei Ren, Gangshan Wu

Subject: Computing Technology, Computer Science

Boyue Xu, Ruichao Hou, Tongwei Ren, Gangshan Wu. Visual and Memory Dual Adapter for Multi-Modal Object Tracking [EB/OL]. (2025-06-30) [2025-07-16]. https://arxiv.org/abs/2506.23972.
