D-CPT Law: Domain-specific Continual Pre-Training Scaling Law for Large Language Models (2406.01375v1)

Published 3 Jun 2024 in cs.CL

Abstract: Continual Pre-Training (CPT) of LLMs is widely used to expand a model's fundamental understanding of specific downstream domains (e.g., math and code). For domain-specific CPT, one important question is how to choose the optimal mixture ratio between the general corpus (e.g., Dolma, Slim-pajama) and the downstream domain corpus. Existing methods usually rely on laborious grid searches over a set of mixture ratios, which incur high GPU training costs, and the selected ratio is still not guaranteed to be optimal for the specific domain. To address these limitations, and inspired by Scaling Laws for performance prediction, we propose to investigate the Scaling Law of Domain-specific Continual Pre-Training (D-CPT Law) to determine the optimal mixture ratio at acceptable training cost for LLMs of different sizes. Specifically, by fitting the D-CPT Law, we can easily predict the general and downstream performance of arbitrary mixture ratios, model sizes, and dataset sizes from small-scale training runs on a limited set of experiments. Moreover, we extend the standard D-CPT Law to cross-domain settings and propose the Cross-Domain D-CPT Law, which predicts the D-CPT Law of a target domain using very small training costs (about 1% of the normal training costs) on that domain. Comprehensive experimental results on six downstream domains demonstrate the effectiveness and generalizability of the proposed D-CPT Law and Cross-Domain D-CPT Law.
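The abstract does not state the D-CPT Law's functional form, so the sketch below is only illustrative of the general workflow it describes: fit a parametric loss surface over (model size, dataset size, mixture ratio) from a handful of small-scale runs, then predict losses at a larger target scale and sweep the ratio. The `loss_surface` form, all coefficients, the synthetic "probe" data, and the ratio-selection rule are assumptions for illustration, not the paper's actual parameterization.

```python
# Minimal sketch, assuming a hypothetical Chinchilla-style surface with an
# extra mixture-ratio term; NOT the paper's actual D-CPT Law formula.
import numpy as np
from scipy.optimize import curve_fit


def loss_surface(X, E, A, alpha, B, beta, C, gamma):
    """Hypothetical loss surface L(N, D, x) = E + A/N^alpha + B/D^beta + C/x^gamma.

    N: model size (units of 1e8 params), D: CPT tokens (units of 1e9),
    x: corpus fraction relevant to this loss head (the domain ratio r for
    domain validation loss, 1 - r for general validation loss).
    """
    N, D, x = X
    return E + A / N**alpha + B / D**beta + C / x**gamma


# --- Small-scale "probe" runs: a few model sizes, token budgets, ratios. ---
rng = np.random.default_rng(0)
Ns = np.array([1.0, 4.0])                    # proxy model sizes
Ds = np.array([1.0, 4.0])                    # proxy token budgets
rs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])     # domain mixture ratios tried

N, D, r = np.array([(n, d, rr) for n in Ns for d in Ds for rr in rs]).T

# Synthetic losses standing in for the measured validation losses of the runs.
L_dom = loss_surface((N, D, r), 1.8, 0.9, 0.3, 2.5, 0.25, 0.2, 0.8)
L_gen = loss_surface((N, D, 1 - r), 1.6, 0.8, 0.3, 2.0, 0.25, 0.15, 0.8)
L_dom += rng.normal(0.0, 0.01, L_dom.shape)
L_gen += rng.normal(0.0, 0.01, L_gen.shape)


def fit_surface(x, L):
    """Least-squares fit of the surface to the probe-run losses."""
    p0 = (1.0, 1.0, 0.3, 1.0, 0.3, 0.1, 0.5)
    lo = [0.0] * 7
    hi = [10.0, 10.0, 1.0, 10.0, 1.0, 10.0, 3.0]
    params, _ = curve_fit(loss_surface, (N, D, x), L, p0=p0, bounds=(lo, hi))
    return params


p_dom = fit_surface(r, L_dom)
p_gen = fit_surface(1 - r, L_gen)

# --- Predict at a larger target scale and sweep the mixture ratio. ---
N_target, D_target = 70.0, 100.0             # e.g. 7B params, 100B CPT tokens
r_grid = np.linspace(0.05, 0.95, 19)
dom_pred = loss_surface((N_target, D_target, r_grid), *p_dom)
gen_pred = loss_surface((N_target, D_target, 1 - r_grid), *p_gen)

# One simple selection rule (a modelling choice, not dictated by the paper):
# best predicted domain loss subject to a cap on general-loss degradation
# relative to the smallest ratio in the sweep.
budget = 0.05
ok = gen_pred <= gen_pred[0] + budget
best_r = r_grid[ok][np.argmin(dom_pred[ok])]
print(f"suggested domain mixture ratio: {best_r:.2f}")
```

In practice the fitted surfaces would come from real small-scale CPT runs rather than synthetic values, and the trade-off criterion between general and domain loss is itself a design choice.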

Authors (16)
  1. Haoran Que (10 papers)
  2. Jiaheng Liu (100 papers)
  3. Ge Zhang (170 papers)
  4. Chenchen Zhang (19 papers)
  5. Xingwei Qu (30 papers)
  6. Yinghao Ma (24 papers)
  7. Feiyu Duan (9 papers)
  8. Zhiqi Bai (5 papers)
  9. Jiakai Wang (33 papers)
  10. Yuanxing Zhang (30 papers)
  11. Xu Tan (164 papers)
  12. Jie Fu (229 papers)
  13. Wenbo Su (36 papers)
  14. Jiamang Wang (12 papers)
  15. Lin Qu (10 papers)
  16. Bo Zheng (205 papers)
Citations (10)