
Nemotron-4 340B Technical Report (2406.11704v1)

Published 17 Jun 2024 in cs.CL, cs.AI, and cs.LG

Abstract: We release the Nemotron-4 340B model family, including Nemotron-4-340B-Base, Nemotron-4-340B-Instruct, and Nemotron-4-340B-Reward. Our models are open access under the NVIDIA Open Model License Agreement, a permissive model license that allows distribution, modification, and use of the models and its outputs. These models perform competitively to open access models on a wide range of evaluation benchmarks, and were sized to fit on a single DGX H100 with 8 GPUs when deployed in FP8 precision. We believe that the community can benefit from these models in various research studies and commercial applications, especially for generating synthetic data to train smaller LLMs. Notably, over 98% of data used in our model alignment process is synthetically generated, showcasing the effectiveness of these models in generating synthetic data. To further support open research and facilitate model development, we are also open-sourcing the synthetic data generation pipeline used in our model alignment process.

An Analysis of Nemotron-4 340B: A Major Contribution to LLMs

This paper provides a thorough examination of the Nemotron-4 340B models developed by NVIDIA, which include Nemotron-4-340B-Base, Nemotron-4-340B-Instruct, and Nemotron-4-340B-Reward. These models, distinguished by their 340-billion-parameter scale, strong benchmark results, and novel use of synthetic data, mark a significant addition to openly accessible LLMs and are released under the permissive NVIDIA Open Model License Agreement.
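The abstract's claim that these models fit on a single 8-GPU DGX H100 when served in FP8 can be sanity-checked with simple arithmetic. The estimate below is illustrative (mine, not the paper's) and counts only the weights:

```python
# Back-of-the-envelope memory estimate (illustrative, not from the paper):
# why FP8 weights can fit on one DGX H100 node with 8 x 80 GB GPUs.
params = 340e9                      # ~340 billion parameters
weight_gb_fp8 = params * 1 / 1e9    # FP8 stores one byte per weight -> ~340 GB
weight_gb_bf16 = params * 2 / 1e9   # BF16 would need ~680 GB
node_hbm_gb = 8 * 80                # 8 H100 GPUs x 80 GB HBM each = 640 GB

print(f"FP8: ~{weight_gb_fp8:.0f} GB, BF16: ~{weight_gb_bf16:.0f} GB, "
      f"node HBM: {node_hbm_gb} GB")
```

In FP8 the weights occupy roughly 340 GB of the node's 640 GB of HBM, leaving headroom for the KV cache and activations, whereas BF16 weights alone would already exceed a single node's memory.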

The Nemotron-4 340B base model was trained on roughly 9 trillion tokens of diverse, high-quality data, while the alignment process relied almost entirely on synthetically generated data (over 98%). This approach reduces the dependency on human-annotated data and reflects the growing utility of synthetic data for improving model behavior without a proportional increase in data curation costs. The authors also released the pipeline used for synthetic data generation, intending to empower further research and application in the community.
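The released pipeline is considerably more elaborate, but the general pattern, sampling candidate responses and keeping only those a reward model scores highly, can be sketched in a few lines. The two helper functions below are hypothetical placeholders rather than the paper's actual API:

```python
# Minimal sketch of a synthetic-data loop (illustrative; the real NVIDIA
# pipeline is far more elaborate). The helpers are hypothetical placeholders.

def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    """Placeholder: in practice, sample n responses from an instruct model."""
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def reward_score(prompt: str, response: str) -> float:
    """Placeholder: in practice, score (prompt, response) with a reward model."""
    return float(len(response))  # dummy heuristic so the sketch runs end to end

def build_sft_examples(prompts: list[str], threshold: float = 0.0) -> list[dict]:
    """Keep the best-scoring candidate per prompt if it clears the threshold."""
    examples = []
    for prompt in prompts:
        scored = [(reward_score(prompt, r), r) for r in generate_candidates(prompt)]
        best_score, best_response = max(scored)
        if best_score >= threshold:
            examples.append({"prompt": prompt, "response": best_response})
    return examples

print(build_sft_examples(["Explain grouped-query attention."]))
```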

Nemotron-4-340B-Base delivers strong performance on commonsense reasoning tasks and outperforms comparable open models such as Llama-3 70B, Mixtral 8x22B, and Qwen-2 72B on several popular benchmarks, including ARC-Challenge, MMLU, and BBH. It also posts competitive scores on HumanEval and MBPP, attesting to its versatility across domains. The Nemotron-4-340B-Instruct model excels at instruction following, reflecting gains from the alignment recipe, which pairs supervised fine-tuning with preference-based methods such as Direct Preference Optimization (DPO).
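For readers unfamiliar with DPO, the objective is compact enough to sketch directly. The function below is the generic DPO loss computed from per-sequence log-probabilities; the use of PyTorch and the argument names are my assumptions, not code from the paper:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Generic DPO objective: increase the policy's preference for the chosen
    response over the rejected one, measured relative to a frozen reference
    model and scaled by beta."""
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    # -log sigmoid(beta * (chosen margin - rejected margin)), batch-averaged
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```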

Notably, the Nemotron-4-340B-Reward model achieved the highest accuracy on RewardBench, surpassing both open-access and proprietary models, including GPT-4o-0513. Its ability to discriminate between candidate responses makes it central to the alignment process, in which much of the preference data consists of high-quality responses selected through synthetic data validation and filtering pipelines.
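To make the filtering role concrete, here is one illustrative way (not taken from the paper) that a reward model's scores can turn several sampled responses into a (chosen, rejected) pair for preference fine-tuning; `score_fn` stands in for Nemotron-4-340B-Reward:

```python
# Illustrative only: building preference pairs from reward-model scores.
# score_fn(prompt, response) -> float is a hypothetical stand-in for
# Nemotron-4-340B-Reward, not the paper's actual interface.

def to_preference_pair(prompt: str, responses: list[str], score_fn) -> dict:
    """Pair the highest-scoring candidate (chosen) with the lowest (rejected)."""
    ranked = sorted(responses, key=lambda r: score_fn(prompt, r))
    return {"prompt": prompt, "chosen": ranked[-1], "rejected": ranked[0]}
```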

The paper emphasizes the advantages of synthetic data. By leaning on it, the authors address a growing issue in LLM development: the cost and labor of human annotation at scale. The released synthetic data generation pipeline thus offers a scalable route to producing aligned models, and its ability to yield diverse, high-quality responses is likely to accelerate progress in downstream AI applications.

In terms of model architecture and training, Nemotron-4-340B-Base is a decoder-only transformer that uses Rotary Position Embeddings and grouped-query attention. Training was carried out on 768 DGX H100 nodes, each with 8 H100 GPUs, a level of parallelism needed to meet the compute and memory demands of a model at this scale.
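Grouped-query attention reduces the key/value cache by letting several query heads share one key/value head. The snippet below is a minimal PyTorch sketch of the mechanism only; the head counts and tensor sizes are illustrative and are not Nemotron-4's actual hyperparameters:

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                            n_q_heads: int, n_kv_heads: int) -> torch.Tensor:
    """Minimal GQA: each KV head serves n_q_heads // n_kv_heads query heads.

    q: (batch, seq, n_q_heads, head_dim); k, v: (batch, seq, n_kv_heads, head_dim)
    """
    group = n_q_heads // n_kv_heads
    # Repeat K/V heads so they line up with the query heads they serve.
    k = k.repeat_interleave(group, dim=2)
    v = v.repeat_interleave(group, dim=2)
    # Switch to (batch, heads, seq, head_dim) for attention, then switch back.
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2)

# Illustrative shapes only (not Nemotron's real configuration):
q = torch.randn(1, 16, 8, 64)   # 8 query heads
k = torch.randn(1, 16, 2, 64)   # 2 KV heads -> each serves 4 query heads
v = torch.randn(1, 16, 2, 64)
print(grouped_query_attention(q, k, v, n_q_heads=8, n_kv_heads=2).shape)
```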

Moving forward, these models could significantly influence AI research and application development, serving as foundation models for downstream work ranging from natural language understanding to complex reasoning. By making the models and the accompanying training methodology publicly available, NVIDIA not only gives back to the research community but also catalyzes future efforts to harness such capable LLMs. These developments may inspire further exploration of sustainable and efficient LLM training practices, particularly those built around synthetic data.

In conclusion, the Nemotron-4 340B model suite represents a judicious balance of architectural choices, data synthesis, and training discipline, pushing forward the boundary of what open-access LLMs can achieve. It stands out not only for its performance on standard benchmarks but also for pioneering data practices that could prove transformative for NLP and AI at large.

Authors (83)
  1. Nvidia (10 papers)
  2. Bo Adler (1 paper)
  3. Niket Agarwal (7 papers)
  4. Ashwath Aithal (12 papers)
  5. Dong H. Anh (1 paper)
  6. Pallab Bhattacharya (12 papers)
  7. Annika Brundyn (4 papers)
  8. Jared Casper (11 papers)
  9. Bryan Catanzaro (123 papers)
  10. Sharon Clay (3 papers)
  11. Jonathan Cohen (31 papers)
  12. Sirshak Das (2 papers)
  13. Ayush Dattagupta (3 papers)
  14. Olivier Delalleau (18 papers)
  15. Leon Derczynski (48 papers)
  16. Yi Dong (46 papers)
  17. Daniel Egert (6 papers)
  18. Ellie Evans (6 papers)
  19. Aleksander Ficek (9 papers)