
TAT-R1: Terminology-Aware Translation with Reinforcement Learning and Word Alignment


Source: arXiv
English Abstract

Recently, deep reasoning large language models (LLMs) such as DeepSeek-R1 have made significant progress on tasks like mathematics and coding. Inspired by this, several studies have employed reinforcement learning (RL) to enhance models' deep reasoning capabilities and improve machine translation (MT) quality. However, terminology translation, an essential MT task, remains unexplored with deep reasoning LLMs. In this paper, we propose TAT-R1, a terminology-aware translation model trained with reinforcement learning and word alignment. Specifically, we first extract keyword translation pairs using a word alignment model. We then carefully design three types of rule-based alignment rewards based on the extracted alignment relationships. With these alignment rewards, the RL-trained translation model learns to focus on accurately translating key information, including terminology in the source text. Experimental results show the effectiveness of TAT-R1: our model significantly improves terminology translation accuracy over baseline models while maintaining comparable performance on general translation tasks. In addition, we conduct detailed ablation studies of the DeepSeek-R1-like training paradigm for machine translation and reveal several key findings.
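The abstract does not spell out the three rule-based alignment rewards, but the general idea of rewarding coverage of aligned keyword pairs can be illustrated with a minimal sketch. The function name `alignment_reward` and the exact-match scoring below are illustrative assumptions, not the paper's implementation; in practice the extracted (source, target) term pairs would come from a word alignment model run over the parallel training data.

```python
# Hypothetical sketch of a rule-based terminology alignment reward.
# The (source_term, target_term) pairs are assumed to be extracted offline
# by a word alignment model; the exact reward shaping used in TAT-R1 is not
# specified in this abstract.

def alignment_reward(hypothesis: str, term_pairs: list[tuple[str, str]]) -> float:
    """Fraction of aligned keyword pairs whose target-side term
    appears verbatim in the model's translation."""
    if not term_pairs:
        return 0.0
    hits = sum(1 for _src, tgt in term_pairs if tgt in hypothesis)
    return hits / len(term_pairs)


# Example: score a candidate translation for covering two extracted terms.
pairs = [("神经网络", "neural network"), ("强化学习", "reinforcement learning")]
hyp = "We train the neural network with reinforcement learning."
print(alignment_reward(hyp, pairs))  # 1.0 -> both aligned terms are covered
```

A reward of this form could be combined with a general translation-quality reward during RL training so the model is pushed toward accurate terminology without sacrificing overall fluency.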

Zheng Li, Mao Zheng, Mingyang Song, Wenjie Yang

Linguistics

Zheng Li, Mao Zheng, Mingyang Song, Wenjie Yang. TAT-R1: Terminology-Aware Translation with Reinforcement Learning and Word Alignment [EB/OL]. (2025-05-27) [2025-06-07]. https://arxiv.org/abs/2505.21172
