QCRD: Quality-guided Contrastive Rationale Distillation for Large Language Models (2405.13014v2)

Published 14 May 2024 in cs.CL and cs.AI

Abstract: The deployment of LLMs faces considerable challenges concerning resource constraints and inference efficiency. Recent research has increasingly focused on smaller, task-specific models enhanced by distilling knowledge from LLMs. However, prior studies have often overlooked the diversity and quality of the distilled knowledge, especially the untapped potential of negative knowledge, and constructing effective negative knowledge remains severely understudied. In this paper, we introduce quality-guided contrastive rationale distillation (QCRD), a framework aimed at enhancing reasoning capabilities through contrastive knowledge learning. For positive knowledge, we enrich its diversity through temperature sampling and employ self-consistency for further denoising and refinement. For negative knowledge, we propose a self-adversarial approach that generates low-quality rationales by sampling from previous iterations of the smaller language models, embracing the idea that one can learn from one's own weaknesses. A contrastive loss distills both positive and negative knowledge into smaller LLMs, and an online-updated discriminator assesses the quality of rationales and assigns them appropriate weights, optimizing the training process. Through extensive experiments across multiple reasoning tasks, we demonstrate that our method consistently outperforms existing distillation techniques, yielding higher-quality rationales.
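
The abstract describes a training objective that rewards the student for fitting high-quality positive rationales and penalizes it for fitting its own low-quality negative rationales, with an online discriminator weighting each example by quality. The sketch below is a minimal, hypothetical illustration of such a quality-weighted contrastive distillation loss, not the authors' implementation: the function names (`sequence_nll`, `qcrd_style_loss`), the Hugging Face-style `student(input_ids=..., labels=...)` call, the discriminator interface returning scores in [0, 1], the margin-based form of the contrastive term, and the hyperparameters `margin` and `neg_weight` are all assumptions made for illustration.

```python
# Hypothetical sketch of a quality-weighted contrastive rationale
# distillation loss, assuming a seq2seq student model and a discriminator
# that scores (input, rationale) pairs in [0, 1]. All names and
# hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def sequence_nll(student, input_ids, rationale_ids):
    """Mean negative log-likelihood of a rationale under the student
    (assumes a Hugging Face-style model returning .loss)."""
    outputs = student(input_ids=input_ids, labels=rationale_ids)
    return outputs.loss


def qcrd_style_loss(student, discriminator,
                    input_ids, pos_rationale_ids, neg_rationale_ids,
                    margin=1.0, neg_weight=0.5):
    # Positive rationales (LLM-generated, denoised via self-consistency):
    # the student should assign them high likelihood (low NLL).
    pos_nll = sequence_nll(student, input_ids, pos_rationale_ids)

    # Negative rationales (sampled from earlier student checkpoints):
    # the student should assign them low likelihood (high NLL).
    neg_nll = sequence_nll(student, input_ids, neg_rationale_ids)

    # The online discriminator scores rationale quality; its scores are
    # used only as weights here, so gradients are not propagated through it.
    with torch.no_grad():
        w_pos = discriminator(input_ids, pos_rationale_ids)        # near 1 for good rationales
        w_neg = 1.0 - discriminator(input_ids, neg_rationale_ids)  # near 1 for bad rationales

    # Margin-based contrastive term: penalize the student unless the
    # negative rationale is at least `margin` nats less likely than the
    # positive one.
    contrastive = F.relu(margin + pos_nll - neg_nll)

    return w_pos * pos_nll + neg_weight * w_neg * contrastive
```

In this sketch the positive term is ordinary rationale distillation, while the contrastive term only contributes when the student has not yet separated good rationales from its own earlier, lower-quality ones; the discriminator weights down noisy positives and uninformative negatives.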

Authors (10)
  1. Wei Wang (1793 papers)
  2. Zhaowei Li (13 papers)
  3. Qi Xu (66 papers)
  4. Yiqing Cai (6 papers)
  5. Hang Song (18 papers)
  6. Qi Qi (66 papers)
  7. Ran Zhou (35 papers)
  8. Zhida Huang (6 papers)
  9. Tao Wang (700 papers)
  10. Li Xiao (85 papers)
Citations (1)