
GLaM: Efficient Scaling of Language Models with Mixture-of-Experts (2112.06905v2)

Published 13 Dec 2021 in cs.CL

Abstract: Scaling LLMs with more data, compute and parameters has driven significant progress in natural language processing. For example, thanks to scaling, GPT-3 was able to achieve strong results on in-context learning tasks. However, training these large dense models requires significant amounts of computing resources. In this paper, we propose and develop a family of LLMs named GLaM (Generalist LLM), which uses a sparsely activated mixture-of-experts architecture to scale the model capacity while also incurring substantially less training cost compared to dense variants. The largest GLaM has 1.2 trillion parameters, which is approximately 7x larger than GPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires half of the computation flops for inference, while still achieving better overall zero-shot and one-shot performance across 29 NLP tasks.

GLaM: Efficient Scaling of LLMs with Mixture-of-Experts

The paper "GLaM: Efficient Scaling of LLMs with Mixture-of-Experts" focuses on developing LLMs using a sparsely activated mixture-of-experts (MoE) approach to enhance scalability while reducing computational demands. The Generalist LLM (GLaM) is proposed, which leverages this architecture to achieve competitive performance with fewer computing resources than traditional dense models.

Key Contributions

GLaM is notable for its scale and efficiency. The largest version contains 1.2 trillion parameters, approximately seven times more than GPT-3, yet it uses only one-third of the energy required to train GPT-3 and roughly half the FLOPs at inference. This represents a substantial reduction in computational cost while achieving better overall performance across a range of NLP benchmarks.
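
As a rough sanity check (not from the paper), the inference claim can be related to activated-parameter counts using the common approximation that a decoder-only Transformer spends about 2N FLOPs per generated token for N activated parameters. With GLaM activating roughly 96.6 billion parameters per token (see Methodology below) against GPT-3's 175 billion dense parameters, the ratio lands near one half:

```python
# Back-of-envelope estimate. Assumption: ~2 * N_activated FLOPs per token,
# a standard approximation for decoder-only Transformers; not from the paper.
def flops_per_token(activated_params: float) -> float:
    return 2.0 * activated_params

glam_activated = 96.6e9   # parameters activated per token (reported in the paper)
gpt3_dense = 175e9        # GPT-3 dense parameter count

ratio = flops_per_token(glam_activated) / flops_per_token(gpt3_dense)
print(f"GLaM / GPT-3 inference FLOPs per token ~ {ratio:.2f}")  # ~ 0.55
```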

Numerical Results

The paper compares GLaM with GPT-3 on zero-shot, one-shot, and few-shot performance across 29 NLP tasks. GLaM consistently surpasses GPT-3, with improvements of 10.2% in the zero-shot setting, 6.3% in one-shot, and 4.4% in few-shot, illustrating its improved learning efficiency. These results underscore GLaM's potential for energy-efficient training and robust task performance.

Methodology

GLaM's architecture combines dense and conditional computation, using sparsely activated MoE layers in which each token activates only a small subset of the model's parameters. This approach lets GLaM process inputs efficiently, activating only 96.6 billion of the model's 1.2 trillion parameters per input token. In addition, a careful data-quality filtering strategy underpins GLaM's performance, demonstrating that data quality remains pivotal even at very large model scales.
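
The sketch below illustrates the general idea of top-2 gating over expert feed-forward networks in an MoE layer of this kind. The dimensions, expert count, and helper names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of a sparsely activated MoE feed-forward layer with top-2 gating.
# Shapes, expert count, and initialization are illustrative assumptions,
# not the configuration used in GLaM.
rng = np.random.default_rng(0)
d_model, d_ff, num_experts, top_k = 64, 256, 8, 2

# Each expert is an independent two-layer feed-forward network.
experts = [
    (rng.normal(scale=0.02, size=(d_model, d_ff)),
     rng.normal(scale=0.02, size=(d_ff, d_model)))
    for _ in range(num_experts)
]
gate_w = rng.normal(scale=0.02, size=(d_model, num_experts))  # router weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (num_tokens, d_model). Each token is routed to its top-2 experts."""
    logits = x @ gate_w                            # (tokens, experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of the top-2 experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = top[t]
        # Softmax over only the selected experts' logits gives the combine weights.
        w = np.exp(logits[t, sel] - logits[t, sel].max())
        w /= w.sum()
        for weight, e in zip(w, sel):
            w1, w2 = experts[e]
            out[t] += weight * (np.maximum(x[t] @ w1, 0.0) @ w2)  # ReLU FFN
    return out

tokens = rng.normal(size=(4, d_model))
print(moe_layer(tokens).shape)  # (4, 64): only 2 of the 8 experts run per token
```

Because only the selected experts' feed-forward weights are used for each token, compute per token grows with the expert size and top-k, not with the total number of experts, which is what allows total parameter count to scale far beyond the per-token activated count.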

Implications and Future Directions

The introduction of MoE-based architectures, such as GLaM, signals a promising direction towards achieving high-quality NLP models that are both scalable and energy-efficient. Given GLaM’s strong performance and reduced resource demands, future exploration should focus on refining these sparse architectures and improving model parallelism algorithms.

Further investigation into the trade-off between data quality and data quantity is warranted. Since GLaM shows that quality-filtered datasets yield better outcomes, this insight could guide how datasets are curated for future large-scale models. Moreover, application-specific adaptations of GLaM, for instance in open-domain question answering or other language understanding tasks, remain fertile ground for exploration.

Conclusion

The paper articulates the advantages of employing MoE architectures in LLMs, as seen with GLaM, which achieves significant advancements in scaling efficiency and performance. By reducing computational costs while enhancing efficacy across a suite of NLP tasks, GLaM represents a viable pathway for developing the next generation of LLMs with practical implications in both energy savings and model scalability.

Authors (27)
  1. Nan Du (66 papers)
  2. Yanping Huang (40 papers)
  3. Andrew M. Dai (40 papers)
  4. Simon Tong (3 papers)
  5. Dmitry Lepikhin (10 papers)
  6. Yuanzhong Xu (16 papers)
  7. Maxim Krikun (20 papers)
  8. Yanqi Zhou (30 papers)
  9. Adams Wei Yu (23 papers)
  10. Orhan Firat (80 papers)
  11. Barret Zoph (38 papers)
  12. Liam Fedus (4 papers)
  13. Maarten Bosma (10 papers)
  14. Zongwei Zhou (60 papers)
  15. Tao Wang (700 papers)
  16. Yu Emma Wang (9 papers)
  17. Kellie Webster (14 papers)
  18. Marie Pellat (11 papers)
  19. Kevin Robinson (10 papers)
  20. Kathleen Meier-Hellstern (3 papers)
Citations (651)