Two Intermediate Translations Are Better Than One: Fine-tuning LLMs for Document-level Translation Refinement
Recent research has shown that large language models (LLMs) can enhance translation quality through self-refinement. In this paper, we build on this idea by extending the refinement from sentence-level to document-level translation, specifically focusing on document-to-document (Doc2Doc) translation refinement. Since sentence-to-sentence (Sent2Sent) and Doc2Doc translation address different aspects of the translation process, we propose fine-tuning LLMs for translation refinement using two intermediate translations, combining the strengths of both Sent2Sent and Doc2Doc. Additionally, recognizing that the quality of intermediate translations varies, we introduce an enhanced fine-tuning method with quality awareness that assigns lower weights to easier translations and higher weights to more difficult ones, enabling the model to focus on challenging translation cases. Experimental results across ten translation tasks with LLaMA-3-8B-Instruct and Mistral-Nemo-Instruct demonstrate the effectiveness of our approach.
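The quality-aware fine-tuning idea described above can be illustrated with a small sketch. This is not the paper's exact formulation; the mapping from a quality score to a sample weight (here, an exponential of the quality deficit with a temperature) and the score range are assumptions for illustration only.

```python
import math

# Illustrative sketch of quality-aware loss weighting (assumed form, not
# the paper's exact method): easier samples (high quality score) receive
# lower weights, harder samples (low quality score) receive higher ones.

def quality_aware_weights(quality_scores, temperature=1.0):
    """Map per-sample quality scores in [0, 1] to training weights.

    Lower quality (a harder translation case) yields a larger weight.
    Weights are normalized to sum to the number of samples, so the
    overall loss scale is preserved.
    """
    raw = [math.exp((1.0 - q) / temperature) for q in quality_scores]
    total = sum(raw)
    n = len(raw)
    return [n * r / total for r in raw]

def weighted_loss(per_sample_losses, quality_scores):
    """Average per-sample losses under quality-aware weights."""
    weights = quality_aware_weights(quality_scores)
    return sum(w * l for w, l in zip(weights, per_sample_losses)) / len(weights)
```

For example, a sample with quality score 0.1 (a difficult translation) receives a larger weight than one with score 0.9, so its loss contributes more to the fine-tuning update.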
Yichen Dong, Xinglin Lyu, Junhui Li, Daimeng Wei, Min Zhang, Shimin Tao, Hao Yang
Linguistics
Yichen Dong, Xinglin Lyu, Junhui Li, Daimeng Wei, Min Zhang, Shimin Tao, Hao Yang. Two Intermediate Translations Are Better Than One: Fine-tuning LLMs for Document-level Translation Refinement [EB/OL]. (2025-04-07) [2025-04-26]. https://arxiv.org/abs/2504.05614.