Technical Report: Competition Solution For BetterMixture (2403.13233v1)
Published 20 Mar 2024 in cs.CL
Abstract: In the era of flourishing large-scale models, selecting and optimizing datasets from a vast and heterogeneous pool of data to improve LLM performance under limited computational resources has become a paramount challenge. This paper details our solution for the BetterMixture challenge, which focuses on data mixing for LLM fine-tuning. Our approach, which secured third place, incorporates data deduplication, low-level and high-level quality filtering, and diversity selection. The foundation of our solution is Ke-Data-Juicer, an extension of Data-Juicer, demonstrating its robust capabilities in handling and optimizing data for LLMs.
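The abstract describes a three-stage selection pipeline: deduplication, two tiers of quality filtering, and diversity selection. The sketch below illustrates the general shape of such a pipeline in plain Python. It is an assumption-based illustration, not the competition code or the Data-Juicer API; the function names, the `"text"` field, and all thresholds are hypothetical.

```python
import hashlib
import random

def deduplicate(samples):
    """Drop exact duplicates by hashing each sample's text."""
    seen, kept = set(), []
    for s in samples:
        digest = hashlib.md5(s["text"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(s)
    return kept

def low_level_quality_filter(samples, min_len=10, max_len=2048, min_alnum_ratio=0.25):
    """Cheap heuristic filters (length, alphanumeric ratio); thresholds are illustrative."""
    def ok(text):
        if not (min_len <= len(text) <= max_len):
            return False
        alnum = sum(c.isalnum() for c in text)
        return alnum / len(text) >= min_alnum_ratio
    return [s for s in samples if ok(s["text"])]

def diversity_select(samples, budget, seed=0):
    """Stand-in for diversity selection: uniform subsampling. A real system
    would select via embeddings or clustering to maximize coverage."""
    rng = random.Random(seed)
    if len(samples) <= budget:
        return samples
    return rng.sample(samples, budget)

def build_mixture(samples, budget):
    """Dedup -> low-level filter -> (high-level scoring) -> diversity selection."""
    pool = deduplicate(samples)
    pool = low_level_quality_filter(pool)
    # A high-level quality score (e.g., an LLM-based rating such as the IFD
    # score of Li et al. 2023) would rank `pool` here before the final pass.
    return diversity_select(pool, budget)

if __name__ == "__main__":
    data = [{"text": f"Example instruction-response pair number {i}."} for i in range(100)]
    data += data[:10]  # inject duplicates
    print(len(build_mixture(data, budget=32)))  # -> 32
```

In the paper's solution these stages are presumably realized as operators chained inside Ke-Data-Juicer; the sketch only mirrors their ordering, not their implementation.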
- Bai, Jinze, et al. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609.
- BELLEGroup. 2023. BELLE: Be everyone's large language model engine. https://github.com/LianjiaTech/BELLE.
- Chen, Daoyuan, et al. 2024. Data-Juicer: A one-stop data processing system for large language models. In International Conference on Management of Data.
- Hu, Edward J., et al. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
- Li, Ming, et al. 2023. From quantity to quality: Boosting LLM performance with self-guided data selection for instruction tuning. arXiv preprint arXiv:2308.12032.
- OpenAI. 2023. ChatGPT: Optimizing language models for dialogue. Blog post.
- Xie, Sang Michael, et al. 2024. DoReMi: Optimizing data mixtures speeds up language model pretraining. Advances in Neural Information Processing Systems, 36.
- Yang, Aiyuan, et al. 2023. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305.
- Zeng, Aohan, et al. 2022. GLM-130B: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414.