Training With "Paraphrasing the Original Text" Teaches LLM to Better Retrieve in Long-context Tasks (2312.11193v10)

Published 18 Dec 2023 in cs.CL and cs.AI

Abstract: As LLMs continue to evolve, more are being designed to handle long-context inputs. Despite this advancement, most of them still face challenges in accurately handling long-context tasks, often showing the "lost in the middle" issue. We identify insufficient retrieval capability as one of the important reasons for this issue. To tackle this challenge, we propose a novel approach to designing training data for long-context tasks, aiming to augment LLMs' proficiency in extracting key information from long contexts. Specifically, we incorporate an additional part named "paraphrasing the original text" when constructing the answers of training samples and then fine-tune the model. Experimenting on the LongBench and NaturalQuestions Multi-document-QA datasets with models from the Llama and Qwen series, our method achieves improvements of up to 8.48% and 4.48% in average scores, respectively, showing its effectiveness in improving the models' performance on long-context tasks.
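
To make the data-construction idea concrete, below is a minimal sketch of how a training sample might be built so that the target answer first restates ("paraphrases") the relevant source passage before giving the final answer. The field names, prompt template, and answer wording are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch (assumed format): build a long-context QA training sample whose
# target answer begins by restating the relevant passage from the context,
# following the "paraphrasing the original text" idea described in the paper.
import json


def build_sample(documents, question, relevant_passage, answer):
    """Concatenate documents into a long context and prepend a restatement
    of the relevant passage to the target answer (hypothetical template)."""
    context = "\n\n".join(documents)
    prompt = (
        "Read the following documents and answer the question.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    # The target output restates the source passage first, which is intended
    # to strengthen the model's retrieval of key information from long context.
    target = (
        f'According to the original text, "{relevant_passage}", '
        f"the answer is: {answer}"
    )
    return {"prompt": prompt, "response": target}


if __name__ == "__main__":
    sample = build_sample(
        documents=[
            "Doc 1: unrelated filler text ...",
            "Doc 2: Paris is the capital of France.",
            "Doc 3: more unrelated filler text ...",
        ],
        question="What is the capital of France?",
        relevant_passage="Paris is the capital of France.",
        answer="Paris",
    )
    print(json.dumps(sample, indent=2))
```

Samples in this form would then be used for supervised fine-tuning; the key design choice is that the answer field carries the paraphrased evidence in addition to the final answer.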

Authors (4)
  1. Yijiong Yu (11 papers)
  2. Yongfeng Huang (110 papers)
  3. Zhixiao Qi (3 papers)
  4. Zhe Zhou (33 papers)
Citations (4)