Empowering Cross-lingual Abilities of Instruction-tuned Large Language Models by Translation-following demonstrations

Source: arXiv
Abstract

The language ability of Large Language Models (LLMs) is often unbalanced towards English because of the imbalance in the distribution of the pre-training data. This disparity carries over into further fine-tuning and affects the cross-lingual abilities of LLMs. In this paper, we propose to empower Instruction-tuned LLMs (It-LLMs) in languages other than English by building semantic alignment between them. Hence, we propose CrossAlpaca, an It-LLM tuned on cross-lingual Instruction-following and Translation-following demonstrations to improve semantic alignment between languages. We validate our approach on the multilingual Question Answering (QA) benchmarks XQuAD and MLQA and on adapted versions of MMLU and BBH. Our models, tested over six different languages, outperform the It-LLMs tuned on monolingual data. The final results show that instruction tuning on non-English data alone is not enough and that semantic alignment can be further improved by Translation-following demonstrations.
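
To make the two demonstration types concrete, here is a minimal illustrative sketch in the Alpaca-style {instruction, input, output} record format. The field names, templates, and example sentences are assumptions for illustration; the paper's actual data construction may differ.

```python
# Illustrative sketch (not the paper's actual data): the two kinds of
# demonstrations the abstract describes, written as Alpaca-style records.
# Field names, language choice, and example sentences are assumptions.

# Cross-lingual instruction-following demonstration: the instruction and
# the expected answer are both in the target language (Italian here).
instruction_following_demo = {
    "instruction": "Elenca tre capitali europee.",  # "List three European capitals."
    "input": "",
    "output": "Roma, Parigi e Berlino.",
}

# Translation-following demonstration: the model is explicitly asked to
# translate between English and the target language, which is the signal
# the abstract credits with improving semantic alignment.
translation_following_demo = {
    "instruction": "Translate the following sentence into Italian.",
    "input": "Large language models are often unbalanced towards English.",
    "output": "I grandi modelli linguistici sono spesso sbilanciati verso l'inglese.",
}

# A fine-tuning set would mix both demonstration types before training.
training_data = [instruction_following_demo, translation_following_demo]
```

In use, such records would be serialized into prompts with the model's instruction template before fine-tuning; mixing the two types is what distinguishes this setup from tuning on monolingual instruction data alone.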

Giulia Pucci, Leonardo Ranaldi, Andre Freitas

DOI: 10.18653/v1/2024.findings-acl.473

Subjects: Linguistics; Commonly Used Foreign Languages; Indo-European Languages

Giulia Pucci, Leonardo Ranaldi, Andre Freitas. Empowering Cross-lingual Abilities of Instruction-tuned Large Language Models by Translation-following demonstrations [EB/OL]. (2023-08-27) [2025-08-10]. https://arxiv.org/abs/2308.14186.
