SHuBERT: Self-Supervised Sign Language Representation Learning via Multi-Stream Cluster Prediction
Sign language processing has traditionally relied on task-specific models, limiting the potential for transfer learning across tasks. Pre-training methods for sign language have typically focused either on supervised pre-training, which cannot take advantage of unlabeled data, or on context-independent (frame or video segment) representations, which ignore temporal relationships in sign language. We introduce SHuBERT (Sign Hidden-Unit BERT), a self-supervised contextual representation model learned from approximately 1,000 hours of American Sign Language video. SHuBERT adapts masked token prediction objectives to multi-stream visual sign language input, learning to predict multiple targets corresponding to clustered hand, face, and body pose streams. SHuBERT achieves state-of-the-art performance across multiple tasks including sign language translation, isolated sign language recognition, and fingerspelling detection.
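To make the multi-stream masked cluster-prediction objective concrete, below is a minimal PyTorch sketch. All module names, dimensions, the encoder layout, and the masking interface are illustrative assumptions based only on the abstract; the paper's actual architecture, feature extractors, clustering procedure, and masking scheme may differ.

```python
# Hypothetical sketch of a multi-stream masked cluster-prediction objective,
# in the spirit of SHuBERT's description. Names and shapes are assumptions,
# not the authors' implementation. Positional encoding is omitted for brevity.
import torch
import torch.nn as nn

class MultiStreamMaskedPredictor(nn.Module):
    def __init__(self, n_streams=4, feat_dim=256, d_model=512,
                 n_clusters=500, n_layers=6, n_heads=8):
        super().__init__()
        # Project concatenated per-stream features (e.g., left hand,
        # right hand, face, body pose) into a shared model dimension.
        self.input_proj = nn.Linear(n_streams * feat_dim, d_model)
        # Learned embedding substituted at masked stream-frame positions.
        self.mask_embed = nn.Parameter(torch.zeros(n_streams, feat_dim))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # One classification head per stream over that stream's cluster IDs.
        self.heads = nn.ModuleList(
            nn.Linear(d_model, n_clusters) for _ in range(n_streams))

    def forward(self, feats, mask, targets):
        # feats:   (B, T, n_streams, feat_dim) per-stream visual features
        # mask:    (B, T, n_streams) bool; True where a stream-frame is masked
        # targets: (B, T, n_streams) long; cluster ID for each stream-frame
        x = torch.where(mask.unsqueeze(-1), self.mask_embed, feats)
        h = self.encoder(self.input_proj(x.flatten(2)))  # (B, T, d_model)
        loss = 0.0
        for s, head in enumerate(self.heads):
            logits = head(h)           # (B, T, n_clusters) for stream s
            m = mask[..., s]           # score masked positions only
            if m.any():
                loss = loss + nn.functional.cross_entropy(
                    logits[m], targets[..., s][m])
        return loss
```

In this sketch, each visual stream is clustered offline into discrete pseudo-labels, and the shared contextual encoder is trained to recover the cluster IDs of masked stream-frames, so a single model learns from all streams jointly while keeping separate prediction targets per stream.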
Shester Gueuwou, Xiaodan Du, Greg Shakhnarovich, Karen Livescu, Alexander H. Liu
Linguistics
Shester Gueuwou, Xiaodan Du, Greg Shakhnarovich, Karen Livescu, Alexander H. Liu. SHuBERT: Self-Supervised Sign Language Representation Learning via Multi-Stream Cluster Prediction [EB/OL]. (2025-07-02) [2025-07-16]. https://arxiv.org/abs/2411.16765