Unveiling the Impact of Coding Data Instruction Fine-Tuning on Large Language Models Reasoning (2405.20535v2)

Published 30 May 2024 in cs.AI and cs.CL

Abstract: Instruction Fine-Tuning (IFT) significantly enhances the zero-shot capabilities of pretrained LLMs. While coding data is known to boost LLM reasoning abilities during pretraining, its role in activating internal reasoning capacities during IFT remains understudied. This paper investigates a key question: How does coding data impact LLMs' reasoning capacities during the IFT stage? To explore this, we thoroughly examine the impact of coding data across different coding data proportions, model families, sizes, and reasoning domains, from various perspectives. Specifically, we create three IFT datasets with increasing coding data proportions, fine-tune six LLM backbones across different families and scales on these datasets, evaluate the tuned models' performance across twelve tasks in three reasoning domains, and analyze the outcomes from three broad-to-granular perspectives: overall, domain-level, and task-specific. Our holistic analysis provides valuable insights into each perspective. First, coding data tuning enhances the overall reasoning capabilities of LLMs across different model families and scales. Moreover, while the impact of coding data varies by domain, it shows consistent trends within each domain across different model families and scales. Additionally, coding data generally provides comparable task-specific benefits across model families, with optimal proportions in IFT datasets being task-dependent.
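
The study design described in the abstract, building IFT mixtures at different coding-data proportions, can be illustrated with a short Python sketch. The function and pool names (build_ift_mixture, code_pool, general_pool) and the example proportions are assumptions made here for illustration; they are not the authors' actual pipeline or the paper's exact mixture ratios.

    import random

    def build_ift_mixture(code_examples, general_examples, coding_proportion, total_size, seed=0):
        """Assemble an instruction-tuning set with a target fraction of coding data.

        code_examples / general_examples: lists of instruction-response records.
        coding_proportion: fraction of the mixture drawn from coding data (e.g. 0.0, 0.25, 0.5).
        """
        rng = random.Random(seed)
        n_code = int(round(total_size * coding_proportion))
        n_general = total_size - n_code
        mixture = rng.sample(code_examples, n_code) + rng.sample(general_examples, n_general)
        rng.shuffle(mixture)
        return mixture

    # Illustrative usage: three mixtures with increasing coding-data proportions,
    # loosely mirroring the three IFT datasets described above (proportions assumed).
    # mixtures = {p: build_ift_mixture(code_pool, general_pool, p, total_size=10_000)
    #             for p in (0.0, 0.25, 0.5)}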

Authors (7)
  1. Xinlu Zhang (15 papers)
  2. Zhiyu Zoey Chen (9 papers)
  3. Xi Ye (33 papers)
  4. Xianjun Yang (37 papers)
  5. Lichang Chen (30 papers)
  6. William Yang Wang (254 papers)
  7. Linda Ruth Petzold (5 papers)
Citations (8)
