
Technical Report: Competition Solution For BetterMixture

Published 20 Mar 2024 in cs.CL (arXiv:2403.13233v1)

Abstract: In the era of flourishing large-scale models, the challenge of selecting and optimizing datasets from the vast and complex sea of data, to enhance the performance of LLMs within the constraints of limited computational resources, has become paramount. This paper details our solution for the BetterMixture challenge, which focuses on the fine-tuning data mixing for LLMs. Our approach, which secured third place, incorporates data deduplication, low-level and high-level quality filtering, and diversity selection. The foundation of our solution is Ke-Data-Juicer, an extension of Data-Juicer, demonstrating its robust capabilities in handling and optimizing data for LLMs.
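The pipeline described in the abstract (deduplication, quality filtering, diversity selection) can be illustrated with a minimal sketch. This is not the authors' implementation or the Data-Juicer API; all function names, thresholds, and the toy dataset below are hypothetical stand-ins for the corresponding stages.

```python
import hashlib

def deduplicate(samples):
    # Exact deduplication via an MD5 hash of the text (hypothetical sketch;
    # real systems also apply fuzzy/near-duplicate detection).
    seen, out = set(), []
    for s in samples:
        h = hashlib.md5(s["text"].encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(s)
    return out

def low_level_filter(samples, min_len=10, max_len=2048):
    # Low-level quality filtering stands in for simple heuristics
    # (length, character ratios, etc.); here only length is checked.
    return [s for s in samples if min_len <= len(s["text"]) <= max_len]

def diversity_select(samples, cap=2):
    # Greedy diversity selection: cap how many samples share a crude
    # signature (here, the first word) -- a toy stand-in for
    # embedding- or cluster-based selection.
    counts, out = {}, []
    for s in samples:
        words = s["text"].split()
        key = words[0] if words else ""
        if counts.get(key, 0) < cap:
            counts[key] = counts.get(key, 0) + 1
            out.append(s)
    return out

data = [
    {"text": "Translate the sentence to French."},
    {"text": "Translate the sentence to French."},   # exact duplicate
    {"text": "Explain gradient descent in one paragraph."},
    {"text": "Hi"},                                  # fails length filter
]
selected = diversity_select(low_level_filter(deduplicate(data)))
print(len(selected))  # 2
```

High-level quality filtering (e.g. model-based scoring) would slot in as an additional stage between the low-level filter and diversity selection.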

References (9)
  1. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609.
  2. BELLEGroup. 2023. BELLE: Be everyone's large language model engine. https://github.com/LianjiaTech/BELLE.
  3. 2024. Data-Juicer: A one-stop data processing system for large language models. In International Conference on Management of Data.
  4. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
  5. 2023. From quantity to quality: Boosting LLM performance with self-guided data selection for instruction tuning. arXiv preprint arXiv:2308.12032.
  6. OpenAI. 2023. ChatGPT: Optimizing language models for dialogue. Blog post.
  7. 2024. DoReMi: Optimizing data mixtures speeds up language model pretraining. Advances in Neural Information Processing Systems, 36.
  8. 2023. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305.
  9. 2022. GLM-130B: An open bilingual pre-trained model.


Authors (2)
