BiggerGait: Unlocking Gait Recognition with Layer-wise Representations from Large Vision Models
Large vision model (LVM)-based gait recognition has achieved impressive performance. However, existing LVM-based approaches may overemphasize gait priors while neglecting the intrinsic value of the LVM itself, particularly the rich, distinct representations across its layers. To fully unlock the LVM's potential, this work investigates the impact of layer-wise representations on downstream recognition tasks. Our analysis reveals that the LVM's intermediate layers offer complementary properties across tasks; integrating them yields an impressive improvement even without rich, well-designed gait priors. Building on this insight, we propose a simple and universal baseline for LVM-based gait recognition, termed BiggerGait. Comprehensive evaluations on CCPG, CASIA-B*, SUSTech1K, and CCGR_MINI validate the superiority of BiggerGait across both within- and cross-domain tasks, establishing it as a simple yet practical baseline for gait representation learning. All models and code will be publicly available.
Dingqing Ye, Chao Fan, Zhanbo Huang, Chengwen Luo, Jianqiang Li, Shiqi Yu, Xiaoming Liu
Computing Technology; Computer Technology
Dingqing Ye, Chao Fan, Zhanbo Huang, Chengwen Luo, Jianqiang Li, Shiqi Yu, Xiaoming Liu. BiggerGait: Unlocking Gait Recognition with Layer-wise Representations from Large Vision Models [EB/OL]. (2025-05-23) [2025-06-17]. https://arxiv.org/abs/2505.18132.