
NoLiMa: Long-Context Evaluation Beyond Literal Matching (2502.05167v2)

Published 7 Feb 2025 in cs.CL

Abstract: Recent LLMs support long contexts ranging from 128K to 1M tokens. A popular method for evaluating these capabilities is the needle-in-a-haystack (NIAH) test, which involves retrieving a "needle" (relevant information) from a "haystack" (long irrelevant context). Extensions of this approach include increasing distractors, fact chaining, and in-context reasoning. However, in these benchmarks, models can exploit existing literal matches between the needle and haystack to simplify the task. To address this, we introduce NoLiMa, a benchmark extending NIAH with a carefully designed needle set, where questions and needles have minimal lexical overlap, requiring models to infer latent associations to locate the needle within the haystack. We evaluate 12 popular LLMs that claim to support contexts of at least 128K tokens. While they perform well in short contexts (<1K), performance degrades significantly as context length increases. At 32K, for instance, 10 models drop below 50% of their strong short-length baselines. Even GPT-4o, one of the top-performing exceptions, experiences a reduction from an almost-perfect baseline of 99.3% to 69.7%. Our analysis suggests these declines stem from the increased difficulty the attention mechanism faces in longer contexts when literal matches are absent, making it harder to retrieve relevant information. We publicly release the dataset and evaluation code at https://github.com/adobe-research/NoLiMa.

NoLiMa: Long-Context Evaluation Beyond Literal Matching

The paper presents NoLiMa, a benchmark designed to evaluate the long-context capabilities of LLMs beyond surface-level matching. As LLMs advertise context windows of up to 1 million tokens, the standard evaluation, the needle-in-a-haystack (NIAH) test, typically relies on literal matches between a “needle” fact and a “haystack” of irrelevant information. The authors argue that this understates the task's difficulty: because LLMs excel at detecting and exploiting surface-level matches, literal overlap between the question and the needle lets models retrieve it without genuine comprehension.

NoLiMa mitigates this by presenting a benchmark that minimizes lexical overlap between queries and the relevant context. By requiring models to leverage latent associations for finding the needle, NoLiMa offers a more stringent and insightful examination of associative reasoning within long contexts.
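The construction can be illustrated with a minimal sketch. The needle/question pair below follows the NoLiMa style (the question and needle share no content words, so locating the needle requires the latent association "the Semperoper is in Dresden"); the exact strings, the `build_haystack` helper, and the filler text are illustrative assumptions, not the released dataset.

```python
# Illustrative NoLiMa-style pair: zero lexical overlap between the question's
# content words and the needle sentence.
question = "Which character has been to Dresden?"
needle = "Actually, Yuki lives next to the Semperoper."

def build_haystack(filler_paragraphs, needle, depth=0.5):
    """Place the needle at a relative depth within the filler context."""
    pos = int(len(filler_paragraphs) * depth)
    return "\n\n".join(filler_paragraphs[:pos] + [needle] + filler_paragraphs[pos:])

filler = [f"Filler paragraph {i} with unrelated text." for i in range(100)]
haystack = build_haystack(filler, needle, depth=0.5)

# No literal match: the question's content words never appear in the needle.
content_words = {"character", "dresden"}
needle_words = {w.strip(".,").lower() for w in needle.split()}
assert not content_words & needle_words
```

Because no token of the question co-occurs with the needle, exact-match attention patterns give the model no anchor; it must bridge the gap through world knowledge.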

Quantitative Insights and Model Evaluations

The paper evaluates 12 prominent LLMs that claim to support contexts of at least 128K tokens, including GPT-4o and Llama 3.3 70B. Performance is strong in short contexts (under 1K tokens), where several models achieve near-perfect base scores, but it degrades markedly as context length grows. At 32K tokens, 10 of the 12 models score below 50% of their short-context baselines, and even a top performer like GPT-4o drops from 99.3% to 69.7%. This exposes a significant limitation: when literal matches are absent, models must infer and connect dispersed information through their attention mechanisms, and they largely fail to do so at longer lengths.
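The degradation criterion above can be stated concretely: a model "drops below 50%" when its long-context score falls under half of its short-context baseline. A small sketch, using the two GPT-4o numbers quoted in the paper:

```python
# Relative score: long-context accuracy as a fraction of the short-context
# baseline. A value below 0.5 means the model fails the 50% criterion.
def relative_score(long_context_score, base_score):
    return long_context_score / base_score

# GPT-4o at 32K (scores from the paper): 69.7 vs. a 99.3 baseline.
gpt4o_at_32k = relative_score(69.7, 99.3)  # ~0.70, so GPT-4o stays above 50%
print(f"GPT-4o at 32K retains {gpt4o_at_32k:.1%} of its baseline")
```

Note that the criterion is relative, not absolute: a model with a weak baseline can "pass" at 32K with a lower raw score than a stronger model that fails.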

Theoretical and Empirical Observations

The research further uncovers several critical factors influencing model performance, such as latent association "hops" and factual ordering. Notably, models exhibit pronounced difficulty when tasks involve multiple associative reasoning steps and the ordering of elements obstructs straightforward retrieval. This underscores the complex nature of long-context reasoning where models falter under increased association depth, especially within large contexts.

Additionally, the authors observe how positional encoding and the inherent capability of the attention mechanism play roles in long-context comprehension, suggesting that without surface-level cues, LLMs struggle to manage the extensive volume of tokens effectively.

Advanced Prompting Techniques and Implications

The paper also explores the potential of Chain-of-Thought prompting and reasoning-based model enhancements to mitigate long-context challenges. While Chain-of-Thought prompting improves performance to some extent, particularly in associative tasks demanding step-by-step reasoning, these improvements are not substantial enough to overcome the limitations posed by extremely long contexts.
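A Chain-of-Thought setup for a NoLiMa-type query might look like the sketch below; the template wording and the `make_cot_prompt` helper are assumptions for illustration, not the paper's exact prompt.

```python
# Hedged sketch: a CoT-style prompt that asks the model to surface
# indirectly related passages before answering, rather than relying on
# literal matches.
def make_cot_prompt(haystack, question):
    return (
        f"{haystack}\n\n"
        f"Question: {question}\n"
        "Think step by step: first list any passages that could be "
        "indirectly related to the question, then reason about latent "
        "associations between them and the question before giving "
        "your final answer."
    )

prompt = make_cot_prompt("<long context here>", "Which character has been to Dresden?")
```

As the paper notes, prompting of this kind helps on multi-step associative tasks but does not close the gap at very long context lengths.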

Implications for Future AI Development

NoLiMa highlights critical limitations in contemporary LLMs' capability to handle long-context tasks without resorting to literal matches. The paper suggests a reevaluation of benchmark designs to better reflect scenarios that require true understanding and inference. The findings have significant implications for future AI systems, particularly in applications like search engines and Retrieval-Augmented Generation (RAG) systems, where the capacity to infer from disjointed content rather than relying on surface textual cues will be invaluable.

The research points to the need for more robust attention mechanisms and associative reasoning strategies that can handle extensive context lengths without sacrificing performance. By emphasizing latent reasoning over surface-level matching, NoLiMa sets a foundational direction for advancing LLMs toward more complex and realistic NLP tasks that demand nuanced, contextually rich understanding.

Authors (7)
  1. Ali Modarressi
  2. Hanieh Deilamsalehy
  3. Franck Dernoncourt
  4. Trung Bui
  5. Ryan A. Rossi
  6. Seunghyun Yoon
  7. Hinrich Schütze