
Never Lost in the Middle: Mastering Long-Context Question Answering with Position-Agnostic Decompositional Training (2311.09198v2)

Published 15 Nov 2023 in cs.CL and cs.AI

Abstract: Although LLMs now accept longer text inputs than before, they still struggle to locate correct information within long contexts. The "lost in the middle" problem challenges most LLMs: accuracy declines dramatically when the correct information sits in the middle of the context. To overcome this issue, this paper proposes enhancing the information-seeking and reflection abilities of LLMs in long contexts via specially designed tasks called Attention Strengthening Multi-doc QA (ASM QA). Trained on these tasks, our model focuses more precisely on the desired information. Experimental results show substantial improvements in multi-doc QA and other benchmarks, surpassing state-of-the-art models by a 13.7% absolute gain in shuffled settings and by 21.5% on the passage retrieval task. We release our model, Ziya-Reader, to promote related research in the community.
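The abstract does not spell out how the training examples are built, but the "shuffled settings" it evaluates suggest position-agnostic data assembly: the gold passage is placed at a random slot among distractors so the model cannot exploit positional shortcuts. Below is a minimal sketch of that idea; the function name `build_shuffled_multidoc_example` and the prompt layout are hypothetical illustrations, not the paper's actual pipeline.

```python
import random

def build_shuffled_multidoc_example(question, gold_passage, distractors, seed=None):
    """Assemble a multi-doc QA prompt in which the gold passage lands at a
    random position among distractors (hypothetical sketch, not the paper's
    actual data pipeline), so the answer is not always first or last."""
    rng = random.Random(seed)
    docs = list(distractors)
    insert_at = rng.randint(0, len(docs))  # any slot, including the middle
    docs.insert(insert_at, gold_passage)
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(docs)
    )
    prompt = f"{context}\n\nQuestion: {question}"
    # Returning the gold index also supports a passage-retrieval variant:
    # ask the model which document contained the answer.
    return prompt, insert_at

# Example usage
prompt, gold_idx = build_shuffled_multidoc_example(
    question="When was the paper published?",
    gold_passage="The paper was published on 15 Nov 2023.",
    distractors=["Unrelated passage A.", "Unrelated passage B."],
    seed=0,
)
print(f"Gold passage placed at slot {gold_idx + 1}")
```

Randomizing the gold passage's slot per example is what makes the training "position-agnostic": the model must search the whole context rather than learn a positional prior.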

Authors (10)
  1. Junqing He (8 papers)
  2. Kunhao Pan (6 papers)
  3. Xiaoqun Dong (2 papers)
  4. Zhuoyang Song (4 papers)
  5. Yibo Liu (34 papers)
  6. Yuxin Liang (7 papers)
  7. Hao Wang (1119 papers)
  8. Qianguo Sun (2 papers)
  9. Jiaxing Zhang (39 papers)
  10. Enming Zhang (14 papers)
Citations (3)