
2 OLMo 2 Furious (2501.00656v1)

Published 31 Dec 2024 in cs.CL and cs.LG

Abstract: We present OLMo 2, the next generation of our fully open LLMs. OLMo 2 includes dense autoregressive models with improved architecture and training recipe, pretraining data mixtures, and instruction tuning recipes. Our modified model architecture and training recipe achieve both better training stability and improved per-token efficiency. Our updated pretraining data mixture introduces a new, specialized data mix called Dolmino Mix 1124, which significantly improves model capabilities across many downstream task benchmarks when introduced via late-stage curriculum training (i.e. specialized data during the annealing phase of pretraining). Finally, we incorporate best practices from T\"ulu 3 to develop OLMo 2-Instruct, focusing on permissive data and extending our final-stage reinforcement learning with verifiable rewards (RLVR). Our OLMo 2 base models sit at the Pareto frontier of performance to compute, often matching or outperforming open-weight only models like Llama 3.1 and Qwen 2.5 while using fewer FLOPs and with fully transparent training data, code, and recipe. Our fully open OLMo 2-Instruct models are competitive with or surpassing open-weight only models of comparable size, including Qwen 2.5, Llama 3.1 and Gemma 2. We release all OLMo 2 artifacts openly -- models at 7B and 13B scales, both pretrained and post-trained, including their full training data, training code and recipes, training logs and thousands of intermediate checkpoints. The final instruction model is available on the Ai2 Playground as a free research demo.

A Technical Overview of the OLMo 2 LLMs

The paper, titled "2 OLMo 2 Furious," introduces the OLMo 2 series, a continuation of the OLMo family of LLMs, aiming to advance fully open models along three axes: training stability, data curriculum design, and post-training methodology. OLMo 2 models are designed to match or surpass the performance of open-weight models such as Qwen 2.5 and Llama 3.1 while requiring fewer training FLOPs.

Model Architecture and Training Stability

One notable enhancement in OLMo 2 is improved training stability, achieved by refining the model architecture. The authors replaced the nonparametric LayerNorm with RMSNorm, reordered the layer norms so that the outputs of the attention and MLP sublayers are normalized rather than their inputs, added QK-norm over the attention queries and keys, and adopted z-loss regularization to keep output logits bounded. Together with initializing every parameter from a normal distribution with mean zero and standard deviation 0.02, these changes mitigated loss spikes and yielded more stable training dynamics, which proved critical for training larger models without divergence.
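A minimal PyTorch sketch of these stability changes, with illustrative module names: QK-norm is simplified here to normalize the block inputs (OLMo 2 normalizes the projected queries and keys per head), causal masking is omitted, and the z-loss coefficient is a common choice rather than a value taken from the paper.

```python
import torch
import torch.nn as nn


class RMSNorm(nn.Module):
    """Root-mean-square norm: a learned scale, no bias, no mean-centering."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)


class Block(nn.Module):
    """Transformer block with reordered norm: sublayer *outputs* are
    normalized before the residual add, instead of the usual pre-norm."""
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.attn_norm = RMSNorm(dim)
        self.mlp_norm = RMSNorm(dim)
        # QK-norm keeps attention logits bounded; simplified to block inputs here.
        self.q_norm = RMSNorm(dim)
        self.k_norm = RMSNorm(dim)

    def forward(self, x):
        attn_out, _ = self.attn(self.q_norm(x), self.k_norm(x), x, need_weights=False)
        x = x + self.attn_norm(attn_out)    # norm after attention, not before
        x = x + self.mlp_norm(self.mlp(x))  # norm after MLP, not before
        return x


def init_weights(module):
    """Draw every weight from N(0, 0.02^2), as described above."""
    if isinstance(module, nn.Linear):
        nn.init.normal_(module.weight, mean=0.0, std=0.02)
        if module.bias is not None:
            nn.init.zeros_(module.bias)


def z_loss(logits, coeff: float = 1e-4):
    """Auxiliary z-loss: penalize large log-partition values to discourage
    logit blow-up (the coefficient is a common choice, not the paper's)."""
    return coeff * torch.logsumexp(logits, dim=-1).pow(2).mean()
```

Calling `model.apply(init_weights)` after construction reproduces the simple normal initialization; the z-loss term would be added to the cross-entropy objective during pretraining.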

Data Curriculum and Pretraining Strategy

OLMo 2's pretraining follows a two-phase curriculum built around the new Dolmino Mix 1124. In the first phase, models pretrain on a high-quality web-data mixture optimized for per-token efficiency; in a second, shorter mid-training phase, training continues on curated mathematical, academic, and synthetic data while the learning rate anneals. This staging gives the models both broad foundational language coverage and targeted subject-specific skill, evident in improved performance on math-centric benchmarks such as GSM8K.
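A schematic sketch of the two-phase schedule; the token budgets, mixture names, weights, and learning-rate shape below are placeholders, not the paper's actual proportions.

```python
# Long pretraining phase on web data, then a short mid-training phase on a
# specialized mix while the learning rate anneals to zero.
PHASES = [
    {"name": "pretrain",  "tokens": 4e12, "mix": {"web": 1.0}},
    {"name": "mid-train", "tokens": 1e11, "mix": {"web": 0.5, "math": 0.25,
                                                  "academic": 0.15, "synthetic": 0.10}},
]

def lr_at(phase: str, frac: float, peak_lr: float = 3e-4) -> float:
    """Illustrative schedule: constant LR while pretraining, then a linear
    anneal to zero over the mid-training phase (frac = progress in [0, 1])."""
    return peak_lr if phase == "pretrain" else peak_lr * (1.0 - frac)

for phase in PHASES:
    # Each batch would be sampled from the phase's sources by mixture weight.
    print(f"{phase['name']}: {phase['tokens']:.0e} tokens, mix={phase['mix']}")
print("lr at end of mid-training:", lr_at("mid-train", 1.0))
```

The key design choice this illustrates is introducing the specialized data only during the annealing phase, when the low learning rate lets the model absorb it without disrupting what was learned from web data.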

Noteworthy Computational Efficiency

A striking aspect of OLMo 2 is its position on the Pareto frontier of performance versus compute: within the comparison set, no model delivers better benchmark performance at lower training cost. The fully open release, with training data, code, and methodology all disclosed, further encourages adoption in the research community by enabling peer verification and reuse.
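To make the Pareto-frontier claim concrete, a small sketch with fabricated, purely illustrative points: a model sits on the frontier when no other model has both lower compute and an equal-or-higher score.

```python
def pareto_frontier(models):
    """Return models not dominated by any other (lower-or-equal FLOPs AND
    higher-or-equal score from a distinct point means dominated)."""
    frontier = []
    for name, flops, score in models:
        dominated = any(f <= flops and s >= score and (f, s) != (flops, score)
                        for _, f, s in models)
        if not dominated:
            frontier.append(name)
    return frontier

# Illustrative (made-up) points: (name, training FLOPs, avg benchmark score)
models = [("A", 1.0, 60), ("B", 2.0, 58), ("C", 3.0, 70)]
print(pareto_frontier(models))  # -> ['A', 'C']; B is dominated by A
```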

Instruction Tuning and Post-Training Techniques

OLMo 2-Instruct adapts the base models to downstream tasks by following the Tülu 3 recipe, deliberately forgoing multilingual support to concentrate resources on permissively licensed datasets and on reinforcement learning with verifiable rewards (RLVR). This final reinforcement stage, applied after supervised finetuning and preference tuning, rewards the model only when its outputs can be programmatically checked, sharpening performance on instruction-following tasks with verifiable answers.
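A minimal sketch of what "verifiable reward" means: a binary reward from a programmatic checker (here, exact match on a final numeric answer) stands in for a learned reward model. The function names and the naive answer parser are hypothetical, for illustration only.

```python
import re

def extract_final_answer(completion: str) -> str | None:
    """Pull the last number from a completion (a deliberately simple parser)."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return matches[-1] if matches else None

def verifiable_reward(completion: str, gold_answer: str) -> float:
    """Binary reward: 1.0 iff the extracted answer matches the reference."""
    answer = extract_final_answer(completion)
    return 1.0 if answer is not None and answer == gold_answer else 0.0

# The scalar reward then drives a standard policy-gradient update on the
# instruction-tuned model.
print(verifiable_reward("... so the total is 42.", "42"))  # -> 1.0
```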

Future Directions

Looking ahead, the open publication and methodological transparency of the OLMo 2 models set a standard for future open-model development. The paper's detailed treatment of pretraining stability and data-driven curriculum design offers a reference point for new models targeting strong capability under tight compute budgets, and the architecture and training improvements described here are likely to influence subsequent open-weight LLMs.

In summary, the OLMo 2 models exemplify advancements in model architecture robustness, training strategies, and post-training methodologies. As future projects look to replicate or surpass these achievements, the groundwork laid by OLMo 2 promises to be a valuable asset for the expansive AI research field.

Authors (40)
  1. Team OLMo (1 paper)
  2. Pete Walsh (9 papers)
  3. Luca Soldaini (62 papers)
  4. Dirk Groeneveld (19 papers)
  5. Kyle Lo (73 papers)
  6. Shane Arora (8 papers)
  7. Akshita Bhagia (12 papers)
  8. Yuling Gu (16 papers)
  9. Shengyi Huang (16 papers)
  10. Matt Jordan (12 papers)
  11. Nathan Lambert (37 papers)
  12. Dustin Schwenk (15 papers)
  13. Oyvind Tafjord (49 papers)
  14. Taira Anderson (3 papers)
  15. David Atkinson (33 papers)
  16. Faeze Brahman (47 papers)
  17. Christopher Clark (27 papers)
  18. Pradeep Dasigi (29 papers)
  19. Nouha Dziri (40 papers)
  20. Michal Guerquin (4 papers)

HackerNews

  1. 2 OLMo 2 Furious (4 points, 1 comment)

Reddit

  1. 2 OLMo 2 Furious (143 points, 35 comments)
  2. 2 OLMo 2 Furious (8 points, 0 comments)