Lost in the Middle: How Language Models Use Long Contexts (2307.03172v3)

Published 6 Jul 2023 in cs.CL

Abstract: While recent LLMs have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of LLMs on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current LLMs do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how LLMs use their input context and provides new evaluation protocols for future long-context LLMs.

Overview

Large language models (LLMs) have become a cornerstone of many applications in artificial intelligence. With models that can parse and generate natural language, the scope of applications has expanded tremendously. However, one critical question remains under-explored: how well do these models actually use long input contexts, given their ability to process thousands of tokens at once? The paper by Liu et al. sheds light on exactly this question, providing insights that could influence future developments in the field.

Understanding Model Performance Across Contexts

The paper systematically analyzes the performance of several state-of-the-art LLMs on two tasks: multi-document question answering and key-value retrieval. The key takeaway is as concerning as it is illuminating: performance degrades significantly when the relevant information sits in the middle of the input context. This finding holds across models, including those explicitly designed to handle long contexts.
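
To make the key-value retrieval task concrete, here is a minimal sketch of how such a probe can be built: a JSON object of random key-value pairs in which the position of the queried pair is controlled. The function name and prompt wording are illustrative assumptions, not the authors' exact code.

```python
import json
import uuid

def make_kv_prompt(num_pairs: int, target_index: int) -> tuple[str, str]:
    """Build a key-value retrieval prompt with the relevant pair at a chosen position.

    Keys and values are random UUID strings; the model must return the value
    associated with the target key. Prompt wording is illustrative.
    """
    pairs = [(str(uuid.uuid4()), str(uuid.uuid4())) for _ in range(num_pairs)]
    target_key, expected_value = pairs[target_index]
    kv_json = json.dumps(dict(pairs), indent=1)  # dicts preserve insertion order
    prompt = (
        "Extract the value corresponding to the specified key from the JSON below.\n\n"
        f"{kv_json}\n\n"
        f"Key: {target_key}\nCorresponding value:"
    )
    return prompt, expected_value

# Example: 75 pairs, with the relevant pair buried in the middle of the context.
prompt, answer = make_kv_prompt(num_pairs=75, target_index=37)
```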

The analysis reveals a distinctive U-shaped curve in model performance: models perform best when the relevant information appears at the beginning or the end of the input context, and worst when it appears in the middle. This pattern points to a primacy and recency bias in these models, highlighting a significant gap in their ability to use information uniformly across the input context.
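
The U-shaped curve can be probed directly by holding the document set fixed and sweeping the position of the answer-bearing passage, as in the following sketch. The function name and prompt wording are illustrative assumptions rather than the paper's exact template.

```python
def build_multidoc_prompt(question: str, gold_doc: str,
                          distractors: list[str], gold_position: int) -> str:
    """Place the answer-bearing document at `gold_position` among distractors.

    Measuring accuracy while sweeping `gold_position` from first to last
    traces out the positional sensitivity (the U-shaped curve) discussed above.
    """
    docs = distractors[:gold_position] + [gold_doc] + distractors[gold_position:]
    numbered = "\n\n".join(f"Document [{i + 1}] {d}" for i, d in enumerate(docs))
    return (
        "Write a high-quality answer for the given question using only "
        "the provided search results.\n\n"
        f"{numbered}\n\nQuestion: {question}\nAnswer:"
    )
```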

Delving Deeper Into Model Capabilities

Further investigation of factors such as model architecture (encoder-decoder vs. decoder-only), query-aware contextualization, and instruction fine-tuning reveals more nuanced behavior. Encoder-decoder models, for instance, are relatively robust to the position of the relevant information, but only at sequence lengths seen during training; on longer sequences the U-shaped performance curve reappears.

Query-aware contextualization showed promise, particularly on the key-value retrieval task: simply placing the query both before and after the context lets the model condition its processing of the context on the query, which can drastically improve performance. Instruction fine-tuning, by contrast, had minimal influence on the observed biases, suggesting that their root causes are more deeply ingrained in model architecture or training methodology.
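
Query-aware contextualization amounts to a one-line prompt transformation; a minimal sketch, assuming a plain-text prompt format:

```python
def query_aware_prompt(query: str, context: str) -> str:
    """Repeat the query before and after the context.

    Models with causal (left-to-right) attention can then condition their
    encoding of every context token on the query, rather than seeing the
    query only after the entire context has been processed.
    """
    return f"{query}\n\n{context}\n\n{query}"
```

Notably, the paper finds this helps key-value retrieval far more than multi-document question answering, where performance trends are largely unchanged.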

Practical Implications and Future Directions

The empirical findings bear directly on real-world applications of LMs. In retrieval-augmented open-domain question answering, for example, the paper makes a striking observation: reader performance saturates well before retriever recall does, so feeding the reader more retrieved documents yields diminishing returns long before the retriever stops surfacing new relevant passages. This points to a fundamental inefficiency in how these models use additional context and challenges the assumption that more context invariably means better performance.

Concluding Thoughts

The paper by Liu et al. provides critical insights into how LLMs use long contexts, highlighting substantial positional biases and inefficiencies. These findings underscore the limitations of current models in processing information uniformly across lengthy inputs, and they chart a path for future research aimed at addressing these challenges. Going forward, understanding and improving how LMs leverage their input context will be paramount to unlocking their full potential across a myriad of applications.

Authors (7)
  1. Nelson F. Liu (19 papers)
  2. Kevin Lin (98 papers)
  3. John Hewitt (24 papers)
  4. Ashwin Paranjape (12 papers)
  5. Michele Bevilacqua (4 papers)
  6. Fabio Petroni (37 papers)
  7. Percy Liang (239 papers)
Citations (1,001)