
NoLiMa: Long-Context Evaluation Beyond Literal Matching

Source: arXiv
Abstract

Recent large language models (LLMs) support long contexts ranging from 128K to 1M tokens. A popular method for evaluating these capabilities is the needle-in-a-haystack (NIAH) test, which involves retrieving a "needle" (relevant information) from a "haystack" (long irrelevant context). Extensions of this approach include increasing distractors, fact chaining, and in-context reasoning. However, in these benchmarks, models can exploit existing literal matches between the needle and haystack to simplify the task. To address this, we introduce NoLiMa, a benchmark extending NIAH with a carefully designed needle set, where questions and needles have minimal lexical overlap, requiring models to infer latent associations to locate the needle within the haystack. We evaluate 13 popular LLMs that claim to support contexts of at least 128K tokens. While they perform well in short contexts (<1K), performance degrades significantly as context length increases. At 32K, for instance, 11 models drop below 50% of their strong short-length baselines. Even GPT-4o, one of the top-performing exceptions, experiences a reduction from an almost-perfect baseline of 99.3% to 69.7%. Our analysis suggests these declines stem from the increased difficulty the attention mechanism faces in longer contexts when literal matches are absent, making it harder to retrieve relevant information. Even models enhanced with reasoning capabilities or CoT prompting struggle to maintain performance in long contexts. We publicly release the dataset and evaluation code at https://github.com/adobe-research/NoLiMa.
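
To make the benchmark setup concrete, below is a minimal sketch of an NIAH-style evaluation prompt in Python: a needle is embedded at a chosen relative depth inside filler text, and a crude word-overlap score illustrates what "minimal lexical overlap" between question and needle means. The function names and the question/needle pair are illustrative assumptions, loosely modeled on the kind of latent-association pair the paper describes, and are not taken from the released NoLiMa code.

```python
# Illustrative sketch only: a minimal NIAH-style prompt builder with a
# lexical-overlap check. All names and the example needle/question are
# hypothetical and do NOT come from the NoLiMa repository.

def lexical_overlap(question: str, needle: str) -> float:
    """Fraction of the question's content words that also appear in the needle."""
    stop = {"the", "a", "an", "of", "in", "to", "has", "been", "which", "who", "what"}
    q_words = {w.strip("?.,!").lower() for w in question.split()} - stop
    n_words = {w.strip("?.,!").lower() for w in needle.split()} - stop
    return len(q_words & n_words) / max(len(q_words), 1)


def build_haystack_prompt(needle: str, filler: str, depth: float, target_words: int) -> str:
    """Embed the needle at a relative depth (0.0-1.0) inside repeated filler text.

    Context length is approximated by whitespace-split word count for simplicity.
    """
    base = filler.split()
    words = (base * (target_words // max(len(base), 1) + 1))[:target_words]
    insert_at = int(len(words) * depth)
    return " ".join(words[:insert_at] + [needle] + words[insert_at:])


if __name__ == "__main__":
    # A NoLiMa-style pair: the question shares no content words with the needle,
    # so answering requires the latent association that the Semperoper is in Dresden.
    needle = "Actually, Yuki lives next to the Semperoper."
    question = "Which character has been to Dresden?"
    print(f"Lexical overlap: {lexical_overlap(question, needle):.2f}")  # expected: 0.00

    filler = "This sentence is irrelevant background text that pads the context. "
    prompt = build_haystack_prompt(needle, filler, depth=0.5, target_words=200)
    print(prompt[:120], "...")
```

In a full evaluation one would sweep the depth and the context length, prepend the question after the haystack, and score the model's answer; the sketch only covers prompt construction and the overlap check.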

Ali Modarressi, Ryan A. Rossi, Hanieh Deilamsalehy, Franck Dernoncourt, Trung Bui, Seunghyun Yoon, Hinrich Schütze

Subject areas: Computing Technology; Computer Technology

Ali Modarressi, Ryan A. Rossi, Hanieh Deilamsalehy, Franck Dernoncourt, Trung Bui, Seunghyun Yoon, Hinrich Schütze. NoLiMa: Long-Context Evaluation Beyond Literal Matching [EB/OL]. (2025-07-09) [2025-07-17]. https://arxiv.org/abs/2502.05167.
