SELF: Self-Evolution with Language Feedback (2310.00533v4)

Published 1 Oct 2023 in cs.CL, cs.AI, and cs.LG

Abstract: LLMs have demonstrated remarkable versatility across various domains. To further advance LLMs, we propose 'SELF' (Self-Evolution with Language Feedback), a novel approach that enables LLMs to self-improve through self-reflection, akin to human learning processes. SELF initiates with a meta-skill learning process that equips the LLM with capabilities for self-feedback and self-refinement. Subsequently, the model undergoes an iterative process of self-evolution: in each iteration, it uses an unlabeled dataset of instructions to generate initial responses, enhances these responses through self-feedback and self-refinement, and is then fine-tuned on the enhanced data, improving progressively with each round. Moreover, the SELF framework enables the model to apply self-refinement during inference, which further improves response quality. Our experiments on mathematical and general tasks demonstrate that SELF can enhance the capabilities of LLMs without human intervention. The SELF framework indicates a promising direction for the autonomous evolution of LLMs, transitioning them from passive information receivers to active participants in their own development.
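
The iterative self-evolution phase described in the abstract can be summarized in code. The following is a minimal sketch based only on the abstract, not the paper's actual implementation: all helper names (generate, self_feedback, self_refine, fine_tune) and type aliases are hypothetical placeholders supplied by the reader.

```python
from typing import Callable, List, Tuple

# Hypothetical type aliases for readability; not from the paper.
Model = object          # stands in for an LLM checkpoint
Instruction = str
Response = str

def self_evolve(
    model: Model,
    unlabeled_instructions: List[Instruction],
    generate: Callable[[Model, Instruction], Response],
    self_feedback: Callable[[Model, Instruction, Response], str],
    self_refine: Callable[[Model, Instruction, Response, str], Response],
    fine_tune: Callable[[Model, List[Tuple[Instruction, Response]]], Model],
    iterations: int = 3,
) -> Model:
    """Sketch of SELF's iterative self-evolution loop (after meta-skill learning)."""
    for _ in range(iterations):
        evolved_data: List[Tuple[Instruction, Response]] = []
        for instruction in unlabeled_instructions:
            # 1. Generate an initial response to the unlabeled instruction.
            response = generate(model, instruction)
            # 2. Produce natural-language feedback on that response (meta-skill).
            feedback = self_feedback(model, instruction, response)
            # 3. Refine the response using the feedback (meta-skill).
            refined = self_refine(model, instruction, response, feedback)
            evolved_data.append((instruction, refined))
        # 4. Fine-tune the model on the self-refined data before the next round.
        model = fine_tune(model, evolved_data)
    return model
```

The same self_feedback/self_refine pair can also be applied at inference time to polish an individual response, which is the inference-time refinement the abstract mentions.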

Authors (12)
  1. Jianqiao Lu (20 papers)
  2. Wanjun Zhong (49 papers)
  3. Wenyong Huang (12 papers)
  4. Yufei Wang (141 papers)
  5. Fei Mi (56 papers)
  6. Baojun Wang (14 papers)
  7. Weichao Wang (15 papers)
  8. Lifeng Shang (90 papers)
  9. Qun Liu (230 papers)
  10. Qi Zhu (160 papers)
  11. Xingshan Zeng (38 papers)
  12. Xin Jiang (242 papers)
Citations (6)