
Evaluating Robustness of LLMs in Question Answering on Multilingual Noisy OCR Data
Source: arXiv
English Abstract

Optical Character Recognition (OCR) plays a crucial role in digitizing historical and multilingual documents, yet OCR errors (imperfect text extraction, including character insertion, deletion, and substitution) can significantly impact downstream tasks such as question answering (QA). In this work, we conduct a comprehensive analysis of how OCR-induced noise affects the performance of multilingual QA systems. To support this analysis, we introduce MultiOCR-QA, a multilingual QA dataset comprising 50K question-answer pairs across three languages: English, French, and German. The dataset is curated from OCR-ed historical documents, which contain different levels and types of OCR noise. We then evaluate how state-of-the-art Large Language Models (LLMs) perform under different error conditions, focusing on three major OCR error types. Our findings show that QA systems are highly prone to OCR-induced errors and perform poorly on noisy OCR text. By comparing model performance on clean versus noisy texts, we provide insights into the limitations of current approaches and emphasize the need for more noise-resilient QA systems in historical digitization contexts.
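The three OCR error types the abstract names (character insertion, deletion, and substitution) can be illustrated with a small sketch that injects synthetic noise into clean text. This is a hypothetical helper, not the paper's method: the error rate, the uniform split across error types, and the lowercase alphabet are all assumptions made for illustration.

```python
import random

def add_ocr_noise(text: str, error_rate: float = 0.1, seed: int = 42) -> str:
    """Simulate OCR-style noise: substitution, deletion, and insertion,
    each triggered with probability error_rate / 3 per character.
    (Illustrative only; real OCR error distributions are not uniform.)"""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    out = []
    for ch in text:
        r = rng.random()
        if r < error_rate / 3:            # substitution: replace the character
            out.append(rng.choice(alphabet))
        elif r < 2 * error_rate / 3:      # deletion: drop the character
            continue
        elif r < error_rate:              # insertion: keep it, add a spurious one
            out.append(ch)
            out.append(rng.choice(alphabet))
        else:                             # no error
            out.append(ch)
    return "".join(out)

clean = "historical documents"
noisy = add_ocr_noise(clean, error_rate=0.2)
```

Comparing a QA model's answers on `clean` versus `noisy` passages, at varying error rates, is the kind of controlled contrast the evaluation above describes.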

Adam Jatowt, Jamshid Mozafari, Antoine Doucet, Bhawna Piryani, Abdelrahman Abdallah

Subject areas: Indo-European language computational techniques; computer technology

Adam Jatowt, Jamshid Mozafari, Antoine Doucet, Bhawna Piryani, Abdelrahman Abdallah. Evaluating Robustness of LLMs in Question Answering on Multilingual Noisy OCR Data [EB/OL]. (2025-08-06) [2025-08-24]. https://arxiv.org/abs/2502.16781
