Base-Detail Feature Learning Framework for Visible-Infrared Person Re-Identification
Visible-infrared person re-identification (VIReID) provides a solution for ReID tasks in 24-hour scenarios; however, achieving satisfactory performance remains challenging due to the substantial discrepancies between the visible (VIS) and infrared (IR) modalities. Existing methods inadequately leverage information from the two modalities, primarily mining discriminative features from modality-shared information while neglecting modality-specific details. To fully utilize these differentiated details, we propose a Base-Detail Feature Learning Framework (BDLF) that enhances the learning of both base and detail knowledge, thereby capitalizing on both modality-shared and modality-specific information. Specifically, BDLF mines detail and base features through a lossless detail feature extraction module and a complementary base embedding generation mechanism, respectively, supported by a novel correlation restriction method that ensures the features obtained by BDLF enrich both detail and base knowledge across VIS and IR features. Comprehensive experiments conducted on the SYSU-MM01, RegDB, and LLCM datasets validate the effectiveness of BDLF.
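The abstract does not specify how the correlation restriction is computed; as a minimal sketch of one plausible realization, the snippet below penalizes the squared cosine similarity between paired base and detail feature vectors, encouraging the two branches to carry complementary information. The function name, penalty form, and input shapes are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def correlation_restriction_loss(base, detail, eps=1e-8):
    """Hypothetical correlation-restriction penalty (not the paper's exact loss):
    mean squared cosine similarity between paired base and detail features.
    Driving it toward zero pushes the two feature sets to be decorrelated.

    base, detail: arrays of shape (batch_size, feature_dim).
    """
    # L2-normalize each feature vector; eps guards against division by zero
    b = base / (np.linalg.norm(base, axis=1, keepdims=True) + eps)
    d = detail / (np.linalg.norm(detail, axis=1, keepdims=True) + eps)
    cos = np.sum(b * d, axis=1)      # per-sample cosine similarity
    return float(np.mean(cos ** 2))  # 0 when orthogonal, 1 when collinear
```

Under this formulation, orthogonal base/detail pairs yield a loss of 0 and identical pairs a loss of 1, so minimizing it alongside the ReID objectives would steer the branches toward distinct subspaces.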
Zhihao Gong, Lian Wu, Yong Xu
Subject: Computing Technology, Computer Technology
Zhihao Gong, Lian Wu, Yong Xu. Base-Detail Feature Learning Framework for Visible-Infrared Person Re-Identification [EB/OL]. (2025-05-06) [2025-05-24]. https://arxiv.org/abs/2505.03286.