Mind's Mirror: Distilling Self-Evaluation Capability and Comprehensive Thinking from Large Language Models (2311.09214v3)

Published 15 Nov 2023 in cs.CL

Abstract: LLMs have achieved remarkable advancements in natural language processing. However, the massive scale and computational demands of these models present formidable challenges when considering their practical deployment in resource-constrained environments. While techniques such as chain-of-thought (CoT) distillation have displayed promise in distilling LLMs into smaller language models (SLMs), there is a risk that distilled SLMs may still inherit flawed reasoning and hallucinations from LLMs. To address these issues, we propose a twofold methodology: First, we introduce a novel method for distilling the self-evaluation capability from LLMs into SLMs, aiming to mitigate the adverse effects of flawed reasoning and hallucinations inherited from LLMs. Second, we advocate for distilling more comprehensive thinking by incorporating multiple distinct CoTs and self-evaluation outputs, to ensure a more thorough and robust knowledge transfer into SLMs. Experiments on three NLP benchmarks demonstrate that our method significantly improves the performance of distilled SLMs, offering a new perspective for developing more effective and efficient SLMs in resource-constrained environments.
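
The abstract's twofold methodology boils down to a data-construction step: sample several distinct chain-of-thought (CoT) rationales from the teacher LLM, have the teacher self-evaluate each rationale, and train the SLM on both the rationales and the evaluations. Below is a minimal Python sketch of that step under stated assumptions; the prompt templates and the `teacher_generate` helper are illustrative placeholders, not the paper's exact implementation.

```python
# Sketch of distillation-data construction combining multiple CoTs with
# teacher self-evaluations. `teacher_generate(prompt) -> str` is a
# hypothetical helper that queries the teacher LLM (e.g., via an API);
# prompt wording is an assumption, not the paper's exact template.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DistillExample:
    prompt: str   # input fed to the student SLM
    target: str   # text the student is trained to produce


def build_distillation_set(
    question: str,
    teacher_generate: Callable[[str], str],
    num_cots: int = 4,
) -> List[DistillExample]:
    """Collect multiple distinct CoTs plus the teacher's self-evaluations."""
    examples: List[DistillExample] = []
    for _ in range(num_cots):
        # 1) Sample one chain-of-thought rationale from the teacher.
        cot = teacher_generate(
            f"Q: {question}\nLet's think step by step."
        )
        examples.append(
            DistillExample(prompt=f"Q: {question}\nA:", target=cot)
        )

        # 2) Ask the teacher to self-evaluate that rationale, and distill
        #    the evaluation output into the student as well, so the SLM
        #    also learns to judge (and potentially reject) flawed reasoning.
        evaluation = teacher_generate(
            f"Q: {question}\nProposed reasoning: {cot}\n"
            "Is this reasoning correct? Explain briefly."
        )
        examples.append(
            DistillExample(
                prompt=f"Q: {question}\nProposed reasoning: {cot}\nEvaluation:",
                target=evaluation,
            )
        )
    return examples
```

Presumably the student SLM is then fine-tuned on these (prompt, target) pairs with a standard language-modeling loss; sampling several distinct CoTs per question is what the abstract refers to as transferring "more comprehensive thinking."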

Authors (9)
  1. Weize Liu (5 papers)
  2. Guocong Li (2 papers)
  3. Kai Zhang (542 papers)
  4. Bang Du (11 papers)
  5. Qiyuan Chen (22 papers)
  6. Xuming Hu (120 papers)
  7. Hongxia Xu (24 papers)
  8. Jintai Chen (57 papers)
  9. Jian Wu (314 papers)
Citations (3)