National Preprint Platform

TULUN: Transparent and Adaptable Low-resource Machine Translation


Source: arXiv
Abstract

Machine translation (MT) systems that support low-resource languages often struggle on specialized domains. While researchers have proposed various techniques for domain adaptation, these approaches typically require model fine-tuning, making them impractical for non-technical users and small organizations. To address this gap, we propose Tulun, a versatile solution for terminology-aware translation, combining neural MT with large language model (LLM)-based post-editing guided by existing glossaries and translation memories. Our open-source web-based platform enables users to easily create, edit, and leverage terminology resources, fostering a collaborative human-machine translation process that respects and incorporates domain expertise while increasing MT accuracy. Evaluations show effectiveness in both real-world and benchmark scenarios: on medical and disaster relief translation tasks for Tetun and Bislama, our system achieves improvements of 16.90-22.41 ChrF++ points over baseline MT systems. Across six low-resource languages on the FLORES dataset, Tulun outperforms both standalone MT and LLM approaches, achieving an average improvement of 2.8 ChrF points over NLLB-54B.
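The abstract describes a two-stage pipeline: a neural MT system produces a draft, and an LLM post-edits it guided by glossary and translation-memory matches. As a minimal sketch of that idea (the prompt wording, function names, and glossary format here are assumptions for illustration, not the authors' implementation):

```python
# Hedged sketch of terminology-aware LLM post-editing, as described in the
# Tulun abstract. All names and the prompt template are illustrative.

def match_glossary(source: str, glossary: dict[str, str]) -> dict[str, str]:
    """Return glossary entries whose source-language term appears in the sentence."""
    src = source.lower()
    return {term: tgt for term, tgt in glossary.items() if term.lower() in src}

def build_postedit_prompt(source: str, mt_draft: str, glossary: dict[str, str]) -> str:
    """Compose an LLM prompt asking it to revise the MT draft so that matched
    glossary terms are rendered with their approved target-language translations."""
    matches = match_glossary(source, glossary)
    term_lines = "\n".join(f"- {s} -> {t}" for s, t in matches.items())
    return (
        "Post-edit the draft translation, using the glossary terms exactly.\n"
        f"Source: {source}\n"
        f"Draft: {mt_draft}\n"
        f"Glossary:\n{term_lines}\n"
        "Revised translation:"
    )

# Example: a medical glossary entry for Tetun (hypothetical data).
prompt = build_postedit_prompt(
    source="The patient has a high fever.",
    mt_draft="Pasiente iha moras.",
    glossary={"fever": "isin manas"},
)
```

The resulting prompt would then be sent to an LLM; because only glossary entries that actually occur in the source are injected, the prompt stays short even when the terminology resource is large.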

Raphaël Merx, Hanna Suominen, Lois Hong, Nick Thieberger, Trevor Cohn, Ekaterina Vylomova

Austronesian language family (Malayo-Polynesian)

Raphaël Merx, Hanna Suominen, Lois Hong, Nick Thieberger, Trevor Cohn, Ekaterina Vylomova. TULUN: Transparent and Adaptable Low-resource Machine Translation [EB/OL]. (2025-05-24) [2025-06-05]. https://arxiv.org/abs/2505.18683.
