National Preprint Platform (国家预印本平台)

Instruction-tuned Large Language Models for Machine Translation in the Medical Domain

Source: arXiv

English Abstract

Large Language Models (LLMs) have shown promising results on machine translation for high-resource language pairs and domains. However, in specialised domains (e.g., medical), LLMs have shown lower performance compared to standard neural machine translation models. Consistency in the machine translation of terminology is crucial for users, researchers, and translators in specialised domains. In this study, we compare the performance of baseline LLMs and instruction-tuned LLMs in the medical domain. In addition, we introduce terminology from specialised medical dictionaries into the instruction-formatted datasets used for fine-tuning the LLMs. The instruction-tuned LLMs significantly outperform the baseline models according to automatic metrics.
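The abstract describes injecting dictionary terminology into instruction-formatted fine-tuning data. As a rough illustration (the exact prompt template, language pair, and field names used in the paper are assumptions here, not taken from the source), one example might be built like this:

```python
# Hypothetical sketch: format one translation pair as an instruction-tuning
# example, listing terminology pairs drawn from a medical dictionary that
# the model is asked to use. The template and language pair are illustrative.
def build_instruction_example(source, target, term_pairs):
    """Return a prompt/completion pair with terminology hints embedded."""
    hints = "; ".join(f"{src} -> {tgt}" for src, tgt in term_pairs)
    prompt = (
        "Translate the following medical text from English to Spanish. "
        f"Use this terminology: {hints}.\n"
        f"Input: {source}\n"
        "Output:"
    )
    return {"prompt": prompt, "completion": " " + target}

example = build_instruction_example(
    "The patient presented with acute myocardial infarction.",
    "El paciente presentó un infarto agudo de miocardio.",
    [("myocardial infarction", "infarto de miocardio")],
)
print(example["prompt"])
```

During fine-tuning, each such record would be fed to the model as a standard prompt/completion pair, so the terminology constraint is learned from the data rather than enforced at decoding time.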

Miguel Rios

Subjects: current state of medicine; medical development; medical research methods

Miguel Rios. Instruction-tuned Large Language Models for Machine Translation in the Medical Domain [EB/OL]. (2025-07-30) [2025-08-06]. https://arxiv.org/abs/2408.16440.
