Cross-Modal Consistency Learning for Sign Language Recognition
Pre-training has proven effective in boosting the performance of Isolated Sign Language Recognition (ISLR). Existing pre-training methods focus solely on compact pose data, which eliminates background perturbation but inevitably suffers from insufficient semantic cues compared to raw RGB videos. However, learning representations directly from RGB videos remains challenging due to the presence of sign-independent visual features. To address this dilemma, we propose a Cross-modal Consistency Learning framework (CCL-SLR), which leverages the cross-modal consistency between the RGB and pose modalities based on self-supervised pre-training. First, CCL-SLR employs contrastive learning for instance discrimination within and across modalities. Through single-modal and cross-modal contrastive learning, CCL-SLR gradually aligns the feature spaces of the RGB and pose modalities, thereby extracting consistent sign representations. Second, we further introduce Motion-Preserving Masking (MPM) and Semantic Positive Mining (SPM), which improve cross-modal consistency from the perspectives of data augmentation and sample similarity, respectively. Extensive experiments on four ISLR benchmarks show that CCL-SLR achieves impressive performance, demonstrating its effectiveness. The code will be released to the public.
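To make the cross-modal alignment idea concrete, the sketch below shows a generic symmetric InfoNCE objective that pulls paired RGB and pose embeddings of the same clip together while pushing apart unpaired ones. This is only an illustrative assumption, not CCL-SLR's actual loss: the encoder names, temperature value, and batch construction are hypothetical.

import torch
import torch.nn.functional as F

def cross_modal_infonce(rgb_emb: torch.Tensor, pose_emb: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """Symmetric cross-modal InfoNCE (illustrative sketch, not the paper's code).

    rgb_emb, pose_emb: (N, D) embeddings of the same N sign clips, where
    row i of each tensor comes from the same clip (a positive pair).
    """
    # L2-normalize so the dot product is cosine similarity.
    rgb = F.normalize(rgb_emb, dim=1)
    pose = F.normalize(pose_emb, dim=1)
    # (N, N) similarity matrix; diagonal entries are the positive pairs.
    logits = rgb @ pose.t() / temperature
    targets = torch.arange(rgb.size(0), device=rgb.device)
    # Symmetric loss: match RGB->pose and pose->RGB.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with stand-in embeddings (in practice these would come from
# separate RGB and pose encoders, e.g. rgb_encoder(video), pose_encoder(keypoints)).
rgb_emb = torch.randn(8, 256)
pose_emb = torch.randn(8, 256)
loss = cross_modal_infonce(rgb_emb, pose_emb)

Minimizing such a loss encourages the two feature spaces to agree on paired samples, which is the consistency property the abstract describes; the single-modal counterpart applies the same objective between two augmented views within one modality.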
Houqiang Li, Hezhen Hu, Wengang Zhou, Zecheng Li, Kepeng Wu
Computing Technology; Computer Technology
Houqiang Li, Hezhen Hu, Wengang Zhou, Zecheng Li, Kepeng Wu. Cross-Modal Consistency Learning for Sign Language Recognition [EB/OL]. (2025-03-16) [2025-05-03]. https://arxiv.org/abs/2503.12485.