
Enhancing LLM Language Adaption through Cross-lingual In-Context Pre-training

Source: arXiv
English Abstract

Large language models (LLMs) exhibit remarkable multilingual capabilities despite English-dominated pre-training, attributed to cross-lingual mechanisms during pre-training. Existing methods for enhancing cross-lingual transfer remain constrained by parallel resources, suffering from limited linguistic and domain coverage. We propose Cross-lingual In-context Pre-training (CrossIC-PT), a simple and scalable approach that enhances cross-lingual transfer by leveraging semantically related bilingual texts via simple next-word prediction. We construct CrossIC-PT samples by interleaving semantically related bilingual Wikipedia documents into a single context window. To address window size constraints, we implement a systematic segmentation policy to split long bilingual document pairs into chunks while adjusting the sliding window mechanism to preserve contextual coherence. We further extend data availability through a semantic retrieval framework to construct CrossIC-PT samples from a web-crawled corpus. Experimental results demonstrate that CrossIC-PT improves multilingual performance on three models (Llama-3.1-8B, Qwen2.5-7B, and Qwen2.5-1.5B) across six target languages, yielding performance gains of 3.79%, 3.99%, and 1.95%, respectively, with additional improvements after data augmentation.
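The abstract describes the core sample-construction step: chunking semantically related bilingual document pairs and interleaving the chunks into context windows of bounded size. The following is a minimal, hypothetical Python sketch of that idea; the function names, character-level chunking, and greedy window packing are illustrative assumptions and do not reproduce the paper's actual segmentation policy or sliding-window adjustment.

```python
# Hypothetical sketch: interleave a bilingual document pair into bounded
# context windows. Names and parameters are illustrative, not from the paper.

def split_into_chunks(text: str, chunk_size: int) -> list[str]:
    """Split a document into fixed-size character chunks (a stand-in for the
    paper's segmentation policy, whose exact rules the abstract does not give)."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def build_crossic_sample(doc_en: str, doc_tgt: str,
                         window_size: int = 4096,
                         chunk_size: int = 512) -> list[str]:
    """Interleave chunks of an English document and its semantically related
    target-language counterpart, then pack them into windows no longer than
    `window_size` characters."""
    en_chunks = split_into_chunks(doc_en, chunk_size)
    tgt_chunks = split_into_chunks(doc_tgt, chunk_size)

    # Alternate English and target-language chunks so related content
    # from both languages lands in the same context window.
    interleaved: list[str] = []
    for en, tgt in zip(en_chunks, tgt_chunks):
        interleaved.extend([en, tgt])
    # Append any leftover chunks when the documents differ in length.
    interleaved.extend(en_chunks[len(tgt_chunks):])
    interleaved.extend(tgt_chunks[len(en_chunks):])

    # Greedily pack interleaved chunks into context windows.
    windows, current = [], ""
    for chunk in interleaved:
        if current and len(current) + len(chunk) > window_size:
            windows.append(current)
            current = ""
        current += chunk
    if current:
        windows.append(current)
    return windows
```

In practice the same packing could be driven by tokenizer token counts rather than characters, and the retrieval framework mentioned in the abstract would supply the semantically related document pairs for web-crawled data.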

Linjuan Wu, Haoran Wei, Huan Lin, Tianhao Li, Baosong Yang, Weiming Lu

Computing Technology; Computer Technology

Linjuan Wu, Haoran Wei, Huan Lin, Tianhao Li, Baosong Yang, Weiming Lu. Enhancing LLM Language Adaption through Cross-lingual In-Context Pre-training [EB/OL]. (2025-04-29) [2025-05-26]. https://arxiv.org/abs/2504.20484.
