AskQE: Question Answering as Automatic Evaluation for Machine Translation
How can a monolingual English speaker determine whether an automatic translation into French is good enough to be shared? Existing MT error detection and quality estimation (QE) techniques do not address this practical scenario. We introduce AskQE, a question generation and answering framework designed to detect critical MT errors and provide actionable feedback, helping users decide whether to accept or reject MT outputs even without knowledge of the target language. Using ContraTICO, a dataset of contrastive synthetic MT errors in the COVID-19 domain, we explore design choices for AskQE and develop an optimized version relying on LLaMA-3 70B and entailed facts to guide question generation. We evaluate the resulting system on the BioMQM dataset of naturally occurring MT errors, where AskQE achieves higher Kendall's Tau correlation with human ratings and higher decision accuracy than other QE metrics.
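The abstract does not give implementation details, so the Python sketch below only illustrates the general QA-as-QE idea it describes: generate questions from facts entailed by the source, answer them against both the source and a backtranslation of the MT output, and flag the translation when the answers disagree. The function names, prompts, token-F1 answer comparison, and accept threshold are all illustrative assumptions, not the paper's actual pipeline.

```python
# A minimal, illustrative sketch of a QA-based QE loop in the spirit of AskQE.
# Everything here (prompts, function names, the token-F1 comparison, the accept
# threshold) is an assumption for illustration; the paper's pipeline may differ.
from typing import Callable, List

LLM = Callable[[str], str]  # any text-in/text-out model, e.g. LLaMA-3 70B behind an API


def generate_questions(llm: LLM, source: str, num_questions: int = 3) -> List[str]:
    """Ask the LLM for factual questions grounded in facts entailed by the source."""
    prompt = (
        "List facts entailed by the sentence below, then write one short factual "
        f"question per fact ({num_questions} questions, one per line).\n"
        f"Sentence: {source}\nQuestions:"
    )
    return [q.strip() for q in llm(prompt).splitlines() if q.strip()][:num_questions]


def answer(llm: LLM, context: str, question: str) -> str:
    """Answer a question using only the given context (source or backtranslated MT)."""
    return llm(f"Context: {context}\nQuestion: {question}\nShort answer:").strip()


def token_f1(a: str, b: str) -> float:
    """Token-overlap F1 between two answers (a simple stand-in for answer comparison)."""
    ta, tb = a.lower().split(), b.lower().split()
    common = sum(min(ta.count(t), tb.count(t)) for t in set(ta))
    if not common:
        return 0.0
    p, r = common / len(tb), common / len(ta)
    return 2 * p * r / (p + r)


def askqe_score(llm: LLM, source: str, backtranslated_mt: str) -> float:
    """Average answer agreement: low scores suggest meaning-changing MT errors."""
    questions = generate_questions(llm, source)
    scores = [
        token_f1(answer(llm, source, q), answer(llm, backtranslated_mt, q))
        for q in questions
    ]
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    # Toy stand-in LLM so the sketch runs end to end; replace with a real model call.
    def toy_llm(prompt: str) -> str:
        if "Questions:" in prompt:
            return "Who should wear a mask?\nWhere should masks be worn?"
        return "everyone indoors" if "everyone" in prompt.lower() else "no one"

    score = askqe_score(toy_llm, "Everyone should wear a mask indoors.",
                        "No one needs to wear a mask indoors.")
    print(f"Answer agreement: {score:.2f} -> {'accept' if score > 0.5 else 'reject'}")
```

The key design point this sketch tries to capture is that answer disagreement between the source and the (back-translated) MT output serves as the error signal, so a monolingual user never needs to read the target-language text directly.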
Dayeon Ki, Kevin Duh, Marine Carpuat
Dayeon Ki, Kevin Duh, Marine Carpuat. AskQE: Question Answering as Automatic Evaluation for Machine Translation [EB/OL]. (2025-04-15) [2025-05-05]. https://arxiv.org/abs/2504.11582