SAGE: Segment-Aware Gloss-Free Encoding for Token-Efficient Sign Language Translation
Gloss-free Sign Language Translation (SLT) has advanced rapidly, achieving strong performance without relying on gloss annotations. However, these gains have often come with increased model complexity and high computational demands, raising concerns about scalability, especially as large-scale sign language datasets become more common. We propose a segment-aware visual tokenization framework that leverages sign segmentation to convert continuous video into discrete, sign-informed visual tokens. This reduces input sequence length by up to 50% compared to prior methods, resulting in up to 2.67x lower memory usage and better scalability on larger datasets. To bridge the visual and linguistic modalities, we introduce a token-to-token contrastive alignment objective, along with dual-level supervision that aligns both language embeddings and intermediate hidden states. This improves fine-grained cross-modal alignment without relying on gloss-level supervision. Our approach notably exceeds the performance of state-of-the-art methods on the PHOENIX14T benchmark, while significantly reducing sequence length. Further experiments also demonstrate our improved performance over prior work under comparable sequence lengths, validating the potential of our tokenization and alignment strategies.
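The abstract does not spell out the token-to-token contrastive alignment objective; a minimal sketch of what such a loss could look like (a symmetric InfoNCE-style formulation over paired token embeddings, with all names, shapes, and the temperature value being our own illustrative assumptions, not the paper's) is:

```python
import numpy as np

def token_contrastive_loss(visual_tokens, text_tokens, temperature=0.07):
    """Symmetric InfoNCE-style token-to-token alignment loss (illustrative sketch).

    visual_tokens, text_tokens: (N, D) arrays of paired token embeddings;
    row i of each array is treated as a positive pair, all other rows as negatives.
    """
    # L2-normalise so the dot product becomes cosine similarity.
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    t = text_tokens / np.linalg.norm(text_tokens, axis=1, keepdims=True)
    logits = v @ t.T / temperature  # (N, N) similarity matrix

    # Cross-entropy against the diagonal (matched pairs), in both directions.
    log_probs_v = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_probs_t = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    n = logits.shape[0]
    return -(np.trace(log_probs_v) + np.trace(log_probs_t)) / (2 * n)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# Perfectly matched tokens should drive the loss toward zero.
loss_matched = token_contrastive_loss(x, x)
loss_random = token_contrastive_loss(x, rng.normal(size=(4, 8)))
print(loss_matched, loss_random)
```

The symmetric form (visual-to-text and text-to-visual cross-entropy, averaged) mirrors common contrastive alignment setups such as CLIP-style training; how SAGE applies this at the token level, and to intermediate hidden states, is described in the full paper.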
JianHe Low, Ozge Mercanoglu Sincan, Richard Bowden
Computing Technology, Computer Technology
JianHe Low, Ozge Mercanoglu Sincan, Richard Bowden. SAGE: Segment-Aware Gloss-Free Encoding for Token-Efficient Sign Language Translation [EB/OL]. (2025-07-12) [2025-08-02]. https://arxiv.org/abs/2507.09266.