Towards Effective and Efficient Continual Pre-training of Large Language Models (2407.18743v1)
Abstract: Continual pre-training (CPT) has been an important approach for adapting LLMs to specific domains or tasks. To make the CPT approach more traceable, this paper presents a technical report for continually pre-training Llama-3 (8B), which significantly enhances the Chinese language ability and scientific reasoning ability of the backbone model. To enhance the new abilities while retaining the original abilities, we design specific data mixture and curriculum strategies that utilize existing datasets and synthesize high-quality datasets. Specifically, we synthesize multidisciplinary scientific question-and-answer (QA) pairs based on related web pages and then incorporate these synthetic data to improve the scientific reasoning ability of Llama-3. We refer to the model after CPT as Llama-3-SynE (Synthetic data Enhanced Llama-3). We also present tuning experiments with a relatively small model, TinyLlama, and apply the derived findings to train the backbone model. Extensive experiments on a number of evaluation benchmarks show that our approach can largely improve the performance of the backbone model, covering both general abilities (+8.81 on C-Eval and +6.31 on CMMLU) and scientific reasoning abilities (+12.00 on MATH and +4.13 on SciEval), without hurting the original capabilities. Our model, data, and code are available at https://github.com/RUC-GSAI/Llama-3-SynE.
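To illustrate the kind of data mixture and curriculum strategy the abstract describes, the following is a minimal sketch of stage-dependent sampling over corpora. The stage names, corpus names, and mixture ratios are hypothetical placeholders for illustration, not the paper's actual recipe; the paper should be consulted for the real proportions.

```python
import random

# Hypothetical two-stage CPT curriculum: the corpus names and mixture
# weights below are illustrative assumptions, not the paper's settings.
STAGE_MIXTURES = {
    "stage1_bilingual_adaptation": {
        "english_web": 0.45,
        "chinese_web": 0.45,
        "synthetic_sci_qa": 0.10,
    },
    "stage2_synthetic_enhanced": {
        "english_web": 0.30,
        "chinese_web": 0.30,
        "synthetic_sci_qa": 0.40,
    },
}

def sample_source(stage: str, rng: random.Random) -> str:
    """Pick the data source for the next training document according to
    the stage-specific mixture weights."""
    mixture = STAGE_MIXTURES[stage]
    sources, weights = zip(*mixture.items())
    return rng.choices(sources, weights=weights, k=1)[0]

rng = random.Random(0)
print(sample_source("stage2_synthetic_enhanced", rng))
```

The idea sketched here is that later curriculum stages up-weight the synthetic scientific QA data while still drawing on the general bilingual corpora, so new abilities are added without abandoning the original data distribution.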
- Jie Chen
- Zhipeng Chen
- Jiapeng Wang
- Kun Zhou
- Yutao Zhu
- Jinhao Jiang
- Yingqian Min
- Wayne Xin Zhao
- Zhicheng Dou
- Jiaxin Mao
- Yankai Lin
- Ruihua Song
- Jun Xu
- Xu Chen
- Rui Yan
- Zhewei Wei
- Di Hu
- Wenbing Huang
- Ji-Rong Wen