
SIKeD: Self-guided Iterative Knowledge Distillation for mathematical reasoning (2410.18574v1)

Published 24 Oct 2024 in cs.AI

Abstract: LLMs can transfer their reasoning skills to smaller models by teaching them to generate the intermediate reasoning process required to solve multistep reasoning tasks. While LLMs can accurately solve reasoning tasks through a variety of strategies, even without fine-tuning, smaller models are not expressive enough to fit the LLM's distribution on all strategies when distilled and tend to prioritize one strategy over the others. This reliance on one strategy poses a challenge for smaller models when attempting to solve reasoning tasks that may be difficult with their preferred strategy. To address this, we propose a distillation method SIKeD (Self-guided Iterative Knowledge Distillation for mathematical reasoning), where the LLM teaches the smaller model to approach a task using different strategies and the smaller model uses its self-generated on-policy outputs to choose the most suitable strategy for the given task. The training continues in a self-guided iterative manner, where for each training iteration, a decision is made on how to combine the LLM data with the self-generated outputs. Unlike traditional distillation methods, SIKeD allows the smaller model to learn which strategy is suitable for a given task while continuously learning to solve a task using different strategies. Our experiments on various mathematical reasoning datasets show that SIKeD significantly outperforms traditional distillation techniques across smaller models of different sizes. Our code is available at: https://github.com/kumar-shridhar/SIKeD

Overview of "SIKeD: Self-guided Iterative Knowledge Distillation for Mathematical Reasoning"

The paper "SIKeD: Self-guided Iterative Knowledge Distillation for Mathematical Reasoning" presents a novel approach to enhancing the mathematical reasoning capabilities of smaller models by distilling knowledge from LLMs. The methodology, SIKeD, aims to overcome the limitations faced by smaller models when trying to replicate the reasoning abilities of larger counterparts.

Key Contributions

The research introduces a distillation framework in which an LLM imparts multiple reasoning strategies to a smaller model. Unlike traditional techniques, where the student model often becomes biased towards a single strategy, SIKeD encourages dynamic learning through iterative, self-guided training. The result is a model that not only adopts diverse problem-solving strategies but also selects the most effective one for a given task through self-generation and on-policy guidance.

Methodology

SIKeD integrates several key steps:

  1. Multi-Strategy Training: The LLM first generates training data covering several reasoning strategies, such as Chain of Thought, Program of Thoughts, and Least-to-Most prompting. The smaller model is distilled on all of these strategies, establishing a baseline.
  2. Self-Generated Data: The smaller model then produces its own rationales, which are filtered for correctness; only outputs that reach the right answer are added to the training pool.
  3. Data Mixing: Combining the LLM data with the self-generated data yields a balanced training distribution, letting the smaller model align with what it has already learned while remaining guided by the LLM-provided strategies.
  4. Iterative Refinement: Repeating these steps enables continuous refinement, encouraging the model to explore and consolidate different strategies (a minimal sketch of one such iteration follows this list).
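
To make the four steps concrete, here is a minimal Python sketch of a single SIKeD iteration. It is written against abstract `student_generate` and `fine_tune` callables; the record fields, strategy names, and `mix_ratio` subsampling rule are illustrative assumptions rather than the paper's exact implementation (the authors' code is in the linked repository).

```python
import random
from typing import Callable, Dict, List

# Strategy labels are assumed for illustration; the paper distills CoT, PoT,
# and Least-to-Most style rationales from the LLM.
STRATEGIES = ["chain_of_thought", "program_of_thoughts", "least_to_most"]

def siked_iteration(
    llm_data: List[Dict],                            # {"question", "strategy", "rationale", "answer"}
    train_set: List[Dict],                           # {"question", "gold_answer"}
    student_generate: Callable[[str, str], Dict],    # (question, strategy) -> {"rationale", "answer"}
    fine_tune: Callable[[List[Dict]], None],         # fine-tunes the student on a list of records
    mix_ratio: float = 0.5,                          # assumed fraction of LLM data kept per round
) -> List[Dict]:
    """One self-guided iteration: generate on-policy data, filter, mix, fine-tune."""
    # Step 2 (Self-Generated Data): sample the student with every strategy and
    # keep only rationales whose final answer matches the gold answer.
    self_data = []
    for ex in train_set:
        for strategy in STRATEGIES:
            out = student_generate(ex["question"], strategy)
            if out["answer"] == ex["gold_answer"]:
                self_data.append({"question": ex["question"],
                                  "strategy": strategy,
                                  "rationale": out["rationale"],
                                  "answer": out["answer"]})

    # Step 3 (Data Mixing): combine a subsample of the LLM-distilled data with
    # the filtered on-policy data (this particular mixing rule is an assumption).
    k = int(mix_ratio * len(llm_data))
    mixed = random.sample(llm_data, k) + self_data

    # Fine-tune the student on the mixed set; Step 4 repeats this whole function.
    fine_tune(mixed)
    return mixed
```

In SIKeD this loop is repeated over several iterations, with the student fine-tuned in one round generating the on-policy data for the next.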

Experimental Results

The proposed method was evaluated on several mathematical reasoning benchmarks: GSM8K, SVAMP, ASDiv, and MultiArith. Across these tasks, SIKeD consistently outperformed traditional single-strategy distillation across smaller models of different sizes, with accuracy gains of up to +5 points in some cases.
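
For context, evaluation on these benchmarks is typically reported as exact-match accuracy on the final numeric answer extracted from the generated rationale. The sketch below illustrates that kind of scoring; the regex-based answer extraction and the toy examples are assumptions for illustration, not the paper's evaluation code.

```python
import re
from typing import List, Optional

def extract_final_number(text: str) -> Optional[str]:
    """Return the last number appearing in a generated rationale, if any."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return numbers[-1] if numbers else None

def exact_match_accuracy(predictions: List[str], gold_answers: List[str]) -> float:
    """Fraction of rationales whose final number equals the gold answer."""
    correct = 0
    for pred, gold in zip(predictions, gold_answers):
        pred_ans = extract_final_number(pred)
        if pred_ans is not None and float(pred_ans) == float(gold):
            correct += 1
    return correct / max(len(gold_answers), 1)

# Toy example: one correct and one incorrect rationale.
preds = ["John has 3 + 4 = 7 apples. The answer is 7.",
         "Half of 10 is 4."]
golds = ["7", "5"]
print(exact_match_accuracy(preds, golds))  # 0.5
```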

Implications and Future Directions

The introduction of SIKeD has several important implications:

  • Scalability: By enabling smaller models to approximate the reasoning capabilities of larger models, SIKeD promotes more resource-efficient model training and deployment.
  • Strategy Selection: The ability to choose the optimal reasoning strategy dynamically enhances the versatility of smaller models in tackling diverse mathematical tasks.
  • Future Research: This work opens avenues for further research into adaptive distillation methods, potentially exploring more complex domains beyond mathematical reasoning.

In conclusion, SIKeD makes significant strides in bridging the gap between large-scale reasoning capabilities and the practical constraints of smaller models. It sets the stage for future innovations in the distillation of complex reasoning skills, moving towards models that are both efficient and effective in various real-world contexts.

Authors (5)
  1. Shivam Adarsh (2 papers)
  2. Kumar Shridhar (25 papers)
  3. Caglar Gulcehre (71 papers)
  4. Nicholas Monath (29 papers)
  5. Mrinmaya Sachan (124 papers)