
Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation?

Source: arXiv
English Abstract

A frequently observed problem with LLMs is their tendency to generate output that is nonsensical, illogical, or factually incorrect, often referred to broadly as hallucination. Building on the recently proposed HalluciGen task for hallucination detection and generation, we evaluate a suite of open-access LLMs on their ability to detect intrinsic hallucinations in two conditional generation tasks: translation and paraphrasing. We study how model performance varies across tasks and languages, and we investigate the impact of model size, instruction tuning, and prompt choice. We find that performance varies across models but is consistent across prompts. Finally, we find that NLI models perform comparably well, suggesting that LLM-based detectors are not the only viable option for this specific task.
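As a rough illustration of the NLI-based detection baseline mentioned in the abstract, the sketch below checks whether a paraphrase contradicts its source sentence using an off-the-shelf MNLI model from the Hugging Face transformers library. The model name and the example sentences are illustrative assumptions, not the paper's exact setup or data.

    # Minimal sketch, assuming the Hugging Face transformers library and an
    # off-the-shelf MNLI model (illustrative choice, not necessarily one evaluated in the paper).
    from transformers import pipeline

    nli = pipeline("text-classification", model="roberta-large-mnli")

    source = "The meeting was postponed until next Friday."
    paraphrase = "The meeting was cancelled."  # contradicts the source, i.e. an intrinsic hallucination

    # The pipeline accepts a premise/hypothesis pair; a CONTRADICTION label is
    # treated here as a signal of intrinsic hallucination.
    result = nli({"text": source, "text_pair": paraphrase})
    print(result)  # e.g. [{'label': 'CONTRADICTION', 'score': ...}] -> flag as hallucination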

Evangelia Gogoulou, Shorouq Zahra, Liane Guillou, Luise Dürlich, Joakim Nivre

Computing technology, computer technology

Evangelia Gogoulou, Shorouq Zahra, Liane Guillou, Luise Dürlich, Joakim Nivre. Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation? [EB/OL]. (2025-04-29) [2025-05-24]. https://arxiv.org/abs/2504.20699.
