
CEM: A Data-Efficient Method for Large Language Models to Continue Evolving From Mistakes (2404.08707v7)

Published 11 Apr 2024 in cs.LG, cs.AI, and cs.CL

Abstract: As world knowledge advances and new task schemas emerge, Continual Learning (CL) becomes essential for keeping LLMs current and addressing their shortcomings. This process typically involves continual instruction tuning (CIT) and continual pre-training (CPT) to enable these models to adapt to novel tasks and acquire critical knowledge. However, collecting sufficient CPT data and efficiently bridging knowledge gaps remain significant challenges. Inspired by the 'summarizing mistakes' strategy, we propose the Continue Evolving from Mistakes (CEM) method, a data-efficient approach aiming to collect CPT data and continually improve LLMs' performance through iterative evaluation and supplementation with mistake-relevant knowledge. To further optimize data usage and mitigate forgetting, we introduce a novel training paradigm that combines CIT and CPT. Experiments show that CEM substantially enhances multiple models' performance on both in-domain and out-of-domain QA tasks, achieving gains of up to 29.63%. Code and datasets are available on https://anonymous.4open.science/r/cem-BB25.
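
The abstract outlines an iterative loop: evaluate the model, collect the questions it answers incorrectly, retrieve mistake-relevant knowledge as CPT data, and continue training with a mix of CPT and CIT to limit forgetting. Below is a minimal Python sketch of one such round, assuming hypothetical answer/retrieve/train callables; none of these names come from the paper's released code.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical stand-ins for a real LLM training stack; placeholders only.

@dataclass
class QAExample:
    question: str
    gold_answer: str

def cem_round(
    answer_fn: Callable[[str], str],          # current model's answer function
    retrieve_fn: Callable[[str], str],        # fetches mistake-relevant passages (CPT data)
    train_fn: Callable[[List[str], List[QAExample]], None],  # one combined CPT + CIT update
    eval_set: List[QAExample],
) -> List[QAExample]:
    """One CEM iteration as described in the abstract: evaluate, collect
    mistakes, supplement with mistake-relevant knowledge, continue training."""
    mistakes = [ex for ex in eval_set
                if answer_fn(ex.question).strip() != ex.gold_answer]
    cpt_passages = [retrieve_fn(ex.question) for ex in mistakes]
    train_fn(cpt_passages, mistakes)          # mix CPT data with instruction data
    return mistakes
```

In practice the round would be repeated until evaluation performance plateaus, with the mistake set shrinking as gaps are filled.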

Authors (6)
  1. Haokun Zhao (4 papers)
  2. Haixia Han (4 papers)
  3. Jie Shi (32 papers)
  4. Chengyu Du (15 papers)
  5. Jiaqing Liang (62 papers)
  6. Yanghua Xiao (151 papers)
Citations (1)
