Improving Language and Modality Transfer in Translation by Character-level Modeling
Current translation systems, despite being highly multilingual, cover only 5% of the world's languages. Expanding language coverage to the long tail of low-resource languages requires data-efficient methods that rely on cross-lingual and cross-modal knowledge transfer. To this end, we propose a character-based approach to improve adaptability to new languages and modalities. Our method leverages SONAR, a multilingual fixed-size embedding space with separate modules for encoding and decoding. We use a teacher-student approach with parallel translation data to obtain a character-level encoder. Then, using ASR data, we train a lightweight adapter to connect a massively multilingual CTC ASR model (MMS) to the character-level encoder, potentially enabling speech translation from 1,000+ languages. Experimental results in text translation for 75 languages on FLORES+ demonstrate that our character-based approach achieves better language transfer than traditional subword-based models, outperforming them especially in low-resource settings and generalizing better zero-shot to unseen languages. Our speech adaptation, maximizing knowledge transfer from the text modality, achieves state-of-the-art results in speech-to-text translation on the FLEURS benchmark across 33 languages, surpassing previous supervised and cascade models, despite being a zero-shot model with minimal supervision from ASR data.
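The teacher-student step described above trains a character-level student encoder to reproduce the teacher's fixed-size sentence embeddings, typically with a regression loss such as MSE. The following is a minimal numeric sketch of that objective; the actual SONAR teacher and character-level student are large pretrained networks, so the frozen teacher vector, the pooled character features, and the per-dimension linear student here are all toy stand-ins.

```python
import random

DIM = 8
random.seed(0)

# Stand-in for a frozen teacher sentence embedding (e.g., from SONAR).
teacher = [random.uniform(-1.0, 1.0) for _ in range(DIM)]

# Stand-in for pooled character-level features of the same sentence.
pooled = [random.uniform(-1.0, 1.0) for _ in range(DIM)]

# Student parameters: one learned weight per embedding dimension.
weights = [0.0] * DIM

def mse(pred, target):
    """Mean squared error between the student and teacher embeddings."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def distill_step(lr=0.1):
    """One gradient step pulling the student embedding toward the teacher's."""
    pred = [w * x for w, x in zip(weights, pooled)]
    for i in range(DIM):
        # d/dw_i of MSE = 2 * (pred_i - teacher_i) * pooled_i / DIM
        grad = 2.0 * (pred[i] - teacher[i]) * pooled[i] / DIM
        weights[i] -= lr * grad
    return mse([w * x for w, x in zip(weights, pooled)], teacher)

losses = [distill_step() for _ in range(200)]
print(losses[0], losses[-1])  # the distillation loss shrinks over training
```

In the paper's setting, the same idea scales up: the student consumes raw characters instead of subwords, and matching the teacher's embedding space lets it plug directly into SONAR's existing decoders.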
Ioannis Tsiamas, David Dale, Marta R. Costa-jussà
Linguistics
Ioannis Tsiamas, David Dale, Marta R. Costa-jussà. Improving Language and Modality Transfer in Translation by Character-level Modeling [EB/OL]. (2025-05-30) [2025-06-23]. https://arxiv.org/abs/2505.24561.