TÜLU 3: Pushing Frontiers in Open Language Model Post-Training (2411.15124v1)

Published 22 Nov 2024 in cs.CL

Abstract: LLM post-training is applied to refine behaviors and unlock new skills across a wide range of recent LLMs, but open recipes for applying these techniques lag behind proprietary ones. The underlying training data and recipes for post-training are simultaneously the most important pieces of the puzzle and the portion with the least transparency. To bridge this gap, we introduce TÜLU 3, a family of fully-open state-of-the-art post-trained models, alongside its data, code, and training recipes, serving as a comprehensive guide for modern post-training techniques. TÜLU 3, which builds on Llama 3.1 base models, achieves results surpassing the instruct versions of Llama 3.1, Qwen 2.5, Mistral, and even closed models such as GPT-4o-mini and Claude 3.5-Haiku. The training algorithms for our models include supervised finetuning (SFT), Direct Preference Optimization (DPO), and a novel method we call Reinforcement Learning with Verifiable Rewards (RLVR). With TÜLU 3, we introduce a multi-task evaluation scheme for post-training recipes with development and unseen evaluations, standard benchmark implementations, and substantial decontamination of existing open datasets on said benchmarks. We conclude with analysis and discussion of training methods that did not reliably improve performance. In addition to the TÜLU 3 model weights and demo, we release the complete recipe -- including datasets for diverse core skills, a robust toolkit for data curation and evaluation, the training code and infrastructure, and, most importantly, a detailed report for reproducing and further adapting the TÜLU 3 approach to more domains.

Overview of "TÜLU 3: Pushing Frontiers in Open Language Model Post-Training"

The paper introduces TÜLU 3, an open-source family of state-of-the-art post-trained LLMs aimed at closing the gap with proprietary counterparts. Central to the work is the comprehensive release of the data, code, and training recipes behind advanced post-training of Llama 3.1 base models. The authors address the lack of transparency and open resources in the field by not only making their models and training regimens public but also surpassing both open models and closed models such as GPT-4o-mini and Claude 3.5-Haiku in performance.

Core Innovations and Techniques

At the core of TÜLU 3's methodology are several sophisticated training stages that go beyond conventional instruction finetuning. These stages include:

  1. Supervised Finetuning (SFT): Training on curated prompt-completion pairs, primarily targeting core skills such as reasoning, math, coding, and instruction following without sacrificing performance in other areas.
  2. Direct Preference Optimization (DPO): A preference-tuning approach leveraging on-policy and off-policy data, improved by extensive data scaling and synthetic preference data generation (a minimal loss sketch follows this list).
  3. Reinforcement Learning with Verifiable Rewards (RLVR): A novel method that uses standard RL machinery but derives its reward from verifiable tasks such as mathematics problem solving and precise instruction following (sketched further below).

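For concreteness, here is a minimal sketch of the DPO objective as it is commonly implemented. It assumes per-sequence log-probabilities for the chosen and rejected responses have already been computed under the policy and a frozen reference model; the function and variable names and the default beta value are illustrative, not taken from the TÜLU 3 codebase.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss for a batch of preference pairs."""
    # Log-ratios of policy to reference for the preferred and dispreferred responses.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Encourage a positive margin between chosen and rejected, scaled by beta.
    margin = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(margin).mean()
```
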
Complementing these training stages, the paper introduces a multi-task evaluation framework that separates development benchmarks from unseen evaluations and aligns training data with them, fostering transparency and reproducibility in evaluation.
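
What distinguishes RLVR from conventional RLHF is that the reward comes from a programmatic correctness check rather than a learned reward model. The sketch below illustrates that idea; the "Answer:" extraction heuristic and the binary 0/1 reward are assumptions made for illustration, not the paper's exact implementation.

```python
import re

def verifiable_reward(completion: str, gold_answer: str) -> float:
    """Binary reward: 1.0 if the completion's final answer matches the reference."""
    # Assume the model marks its final answer with a prefix like "Answer: 42";
    # this regex stands in for whatever extraction the real pipeline uses.
    match = re.search(r"Answer:\s*(.+)", completion)
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip() == gold_answer.strip() else 0.0
```

During RL training, a check like this supplies the reward for prompts whose correctness can be verified automatically, such as math problems with known answers or instructions with programmatically checkable constraints.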

Results and Implications

The comprehensive empirical evaluations demonstrate TÜLU 3's superiority over top-tier open models such as Llama 3.1 Instruct and Qwen 2.5 Instruct, with substantial improvements across benchmarks spanning reasoning, math, and instruction following. The model's open-source nature and detailed documentation stand to greatly benefit the research community by setting a new standard for transparency and accessibility in model development.

Broader Impact and Future Directions

TÜLU 3's open-source release has far-reaching implications for LLM research. By providing a clear and reproducible recipe for training high-performance open-weight models, the paper lays the groundwork for future work in post-training and invites the community to build on its findings without the constraints often imposed by closed-source models.

The inclusion of detailed data decontamination procedures and instruction-following benchmarks such as IFEval-OOD further highlights the need for rigorous evaluation mechanisms that guard against benchmark contamination and stress-test instruction-following capabilities. Future research can extend TÜLU 3's methodologies to multilingual capabilities, long-context understanding, and more complex skill domains, which the authors acknowledge as beyond the scope of the paper's primary contributions.
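
As an illustration of what such decontamination typically involves, the sketch below flags training examples whose token n-grams overlap heavily with any evaluation item. The 8-gram window and 50% overlap threshold are assumed values for illustration, not the settings reported in the paper.

```python
def ngrams(text: str, n: int = 8) -> set:
    """Lowercased whitespace-token n-grams of a string."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(train_example: str, eval_examples: list, n: int = 8,
                    threshold: float = 0.5) -> bool:
    """Flag a training example that shares many n-grams with any eval example."""
    train_grams = ngrams(train_example, n)
    if not train_grams:
        return False
    for eval_example in eval_examples:
        overlap = len(train_grams & ngrams(eval_example, n)) / len(train_grams)
        if overlap >= threshold:
            return True
    return False
```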

In summary, TÜLU 3 represents a pivotal stride toward democratizing access to advanced AI models, promising enhancements in both theoretical exploration and practical applications of AI technologies.

Authors (23)
  1. Nathan Lambert (37 papers)
  2. Jacob Morrison (15 papers)
  3. Valentina Pyatkin (34 papers)
  4. Shengyi Huang (16 papers)
  5. Hamish Ivison (14 papers)
  6. Faeze Brahman (47 papers)
  7. Lester James V. Miranda (11 papers)
  8. Alisa Liu (25 papers)
  9. Nouha Dziri (39 papers)
  10. Shane Lyu (2 papers)
  11. Yuling Gu (16 papers)
  12. Saumya Malik (5 papers)
  13. Victoria Graf (3 papers)
  14. Jena D. Hwang (36 papers)
  15. Jiangjiang Yang (5 papers)
  16. Ronan Le Bras (56 papers)
  17. Oyvind Tafjord (49 papers)
  18. Chris Wilhelm (5 papers)
  19. Luca Soldaini (62 papers)
  20. Noah A. Smith (224 papers)