Mixed Distillation Helps Smaller Language Model Better Reasoning (2312.10730v2)

Published 17 Dec 2023 in cs.CL and cs.AI

Abstract: While LLMs have demonstrated exceptional performance in recent NLP tasks, their deployment poses substantial challenges due to high computational and memory demands in real-world applications. Recent studies have focused on enhancing smaller models through knowledge distillation from LLMs, yielding promising results. However, these models often struggle to match the performance of LLMs, especially in tasks that require reasoning. In this work, we introduce the Mixed Distillation (MD) framework, which capitalizes on the strengths of the Program of Thought (PoT) and Chain of Thought (CoT) capabilities within LLMs, combining multiple prompting techniques and distilling these capabilities into smaller models. Our experimental results show that MD significantly enhances the single-path and multi-path reasoning abilities of smaller models across various tasks. In terms of accuracy and generality on reasoning tasks, the model produced by MD surpasses the overall performance of the two individually distilled models. Notably, LLaMA2-7B and CodeLlama-7B trained with MD achieved remarkable accuracies of 84.5% and 85.5%, respectively, outperforming GPT-3.5-Turbo by 2.5% and 3.5% on the SVAMP benchmark.
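The abstract describes distilling both CoT rationales and PoT programs from a teacher LLM into one smaller student. The sketch below shows, under stated assumptions, how such a mixed distillation training set might be assembled: for each question, the teacher is queried once for a natural-language rationale and once for a program, and both are kept as fine-tuning targets tagged by format. The prompt templates, the `[CoT]`/`[PoT]` tags, and the `call_teacher` callable are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of mixed CoT/PoT distillation data construction (assumed design,
# not the paper's released code). `call_teacher` stands in for a teacher LLM API
# call such as GPT-3.5-Turbo.
from dataclasses import dataclass
from typing import Callable, List

COT_PROMPT = "Q: {question}\nLet's think step by step."
POT_PROMPT = "Q: {question}\n# Write Python code that computes the answer."


@dataclass
class DistillExample:
    prompt: str   # question plus a format tag the student learns to follow
    target: str   # teacher-generated rationale (CoT text) or program (PoT code)


def build_mixed_dataset(
    questions: List[str],
    call_teacher: Callable[[str], str],
) -> List[DistillExample]:
    """For each question, collect both a CoT rationale and a PoT program from
    the teacher, so the student is fine-tuned on both reasoning formats."""
    examples: List[DistillExample] = []
    for q in questions:
        cot = call_teacher(COT_PROMPT.format(question=q))
        pot = call_teacher(POT_PROMPT.format(question=q))
        examples.append(DistillExample(prompt=f"[CoT] {q}", target=cot))
        examples.append(DistillExample(prompt=f"[PoT] {q}", target=pot))
    return examples


if __name__ == "__main__":
    # Toy stand-in teacher, for illustration only.
    def fake_teacher(prompt: str) -> str:
        if "code" in prompt:
            return "answer = 3 + 4\nprint(answer)"
        return "3 + 4 = 7. The answer is 7."

    for ex in build_mixed_dataset(["What is 3 + 4?"], fake_teacher):
        print(ex.prompt, "->", ex.target)
```

At inference time, a student trained this way could presumably be prompted with either tag, with PoT outputs executed to obtain answers; the multi-path setting mentioned in the abstract would then correspond to sampling several CoT and PoT paths and aggregating their answers, e.g. by majority vote.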

Authors (7)
  1. Chenglin Li (42 papers)
  2. Qianglong Chen (25 papers)
  3. Liangyue Li (15 papers)
  4. Caiyu Wang (3 papers)
  5. Yicheng Li (38 papers)
  6. Zulong Chen (19 papers)
  7. Yin Zhang (98 papers)
Citations (5)