Zyda-2: a 5 Trillion Token High-Quality Dataset (2411.06068v1)
Published 9 Nov 2024 in cs.CL and cs.AI
Abstract: In this technical report, we present Zyda-2: a five trillion token dataset for LLM pretraining. Zyda-2 was used to train our Zamba2 series of models, which are state-of-the-art for their weight class. We build Zyda-2 by collating tokens from high-quality open-source datasets such as FineWeb and DCLM, then distilling them to the highest-quality subset via cross-deduplication and model-based quality filtering. Zyda-2 is released under a permissive open license and is available at https://huggingface.co/datasets/Zyphra/Zyda-2
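To make the cross-deduplication step concrete, below is a minimal sketch of how near-duplicate documents in one source dataset can be dropped when they already appear in another, using MinHash signatures with locality-sensitive hashing (the `datasketch` library). The abstract does not specify Zyda-2's actual dedup tooling or parameters, so the similarity threshold, shingle size, and permutation count here are illustrative assumptions, not the paper's settings.

```python
# Sketch: cross-dataset near-duplicate removal with MinHash LSH.
# All parameters below are assumptions for illustration.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128           # number of hash permutations (assumed)
JACCARD_THRESHOLD = 0.8  # near-duplicate similarity cutoff (assumed)

def minhash(text: str, ngram: int = 5) -> MinHash:
    """Build a MinHash signature over word n-gram shingles of a document."""
    m = MinHash(num_perm=NUM_PERM)
    words = text.split()
    for i in range(max(1, len(words) - ngram + 1)):
        m.update(" ".join(words[i:i + ngram]).encode("utf-8"))
    return m

def cross_dedup(kept_docs: dict[str, str], new_docs: dict[str, str]) -> dict[str, str]:
    """Keep only documents in new_docs that do not near-duplicate kept_docs."""
    lsh = MinHashLSH(threshold=JACCARD_THRESHOLD, num_perm=NUM_PERM)
    for doc_id, text in kept_docs.items():
        lsh.insert(doc_id, minhash(text))
    return {doc_id: text for doc_id, text in new_docs.items()
            if not lsh.query(minhash(text))}

if __name__ == "__main__":
    # Hypothetical toy documents standing in for two source datasets.
    source_a = {"a-0": "the quick brown fox jumps over the lazy dog again and again"}
    source_b = {
        "b-0": "the quick brown fox jumps over the lazy dog again and again",  # duplicate
        "b-1": "an entirely different document about language model pretraining data",
    }
    print(cross_dedup(source_a, source_b))  # retains only b-1
```

At Zyda-2's scale this kind of dedup is typically run as a distributed job rather than in-memory as above; the released dataset itself can be loaded through the Hugging Face link given in the abstract.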