
Distilling LLMs' Decomposition Abilities into Compact Language Models (2402.01812v1)

Published 2 Feb 2024 in cs.CL, cs.AI, and cs.LG

Abstract: LLMs have demonstrated proficiency in their reasoning abilities, yet their large size presents scalability challenges and limits any further customization. In contrast, compact models offer customized training but often fall short in solving complex reasoning tasks. This study focuses on distilling the LLMs' decomposition skills into compact models using offline reinforcement learning. We leverage the advancements in the LLM's capabilities to provide feedback and generate a specialized task-specific dataset for training compact models. The development of an AI-generated dataset and the establishment of baselines constitute the primary contributions of our work, underscoring the potential of compact models in replicating complex problem-solving skills.
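The pipeline the abstract describes (a teacher LLM produces question decompositions, which are then used to train a compact student) can be sketched as follows. This is a minimal illustration only: it uses plain supervised fine-tuning on teacher-generated decomposition traces as a simplified stand-in for the paper's offline reinforcement learning objective, and the model name, prompt format, and example data are illustrative assumptions rather than the authors' exact setup.

```python
# Sketch: distill a teacher LLM's decomposition traces into a compact student.
# Supervised fine-tuning here is a simplified stand-in for the paper's
# offline-RL training; model, prompts, and data are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

STUDENT = "gpt2"  # placeholder compact model

# Teacher-generated (question, decomposition) pairs; in practice these would
# be produced by prompting a large model and filtered using its feedback.
teacher_data = [
    ("If Tom has 3 boxes with 4 apples each, how many apples does he have?",
     "Q1: How many boxes does Tom have? "
     "Q2: How many apples are in each box? "
     "Q3: What is 3 * 4?"),
]

tokenizer = AutoTokenizer.from_pretrained(STUDENT)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(STUDENT)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for question, decomposition in teacher_data:
    text = f"Question: {question}\nDecomposition: {decomposition}"
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    # Standard causal-LM loss on the teacher's decomposition trace.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In the paper's setting, the student would instead be optimized with an offline RL objective that weights traces by the teacher's feedback rather than treating all generated decompositions as equally good targets.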

Authors (2)
  1. Denis Tarasov (15 papers)
  2. Kumar Shridhar (25 papers)
Citations (2)