
Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models (2104.05158v7)

Published 12 Apr 2021 in cs.DC, cs.AI, cs.LG, and cs.PF

Abstract: Deep learning recommendation models (DLRMs) are used across many business-critical services at Facebook and are the single largest AI application in terms of infrastructure demand in its data-centers. In this paper we discuss the SW/HW co-designed solution for high-performance distributed training of large-scale DLRMs. We introduce a high-performance scalable software stack based on PyTorch and pair it with the new evolution of Zion platform, namely ZionEX. We demonstrate the capability to train very large DLRMs with up to 12 Trillion parameters and show that we can attain 40X speedup in terms of time to solution over previous systems. We achieve this by (i) designing the ZionEX platform with dedicated scale-out network, provisioned with high bandwidth, optimal topology and efficient transport (ii) implementing an optimized PyTorch-based training stack supporting both model and data parallelism (iii) developing sharding algorithms capable of hierarchical partitioning of the embedding tables along row, column dimensions and load balancing them across multiple workers; (iv) adding high-performance core operators while retaining flexibility to support optimizers with fully deterministic updates (v) leveraging reduced precision communications, multi-level memory hierarchy (HBM+DDR+SSD) and pipelining. Furthermore, we develop and briefly comment on distributed data ingestion and other supporting services that are required for the robust and efficient end-to-end training in production environments.

Citations (132)

Summary

  • The paper introduces 4D parallelism that partitions training across table, row, column, and data dimensions to optimize embedding computations.
  • The research implements performance optimizations like hybrid kernel fusion, software-managed caching, and compression to reduce memory and compute costs.
  • The co-designed ZionEX hardware, integrated with Neo, delivers up to a 40x throughput improvement for 12 trillion parameter DLRMs using 128 GPUs.

Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models

The paper "Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models" addresses the computational and infrastructural challenges of training large-scale Deep Learning Recommendation Models (DLRMs) at Meta's data centers. DLRMs are prevalent across major online platforms where personalization and recommendation are critical, leading to significant demands on computational resources. The research presents Neo, a co-designed software and hardware solution, to improve the performance and scalability of training DLRMs, incorporating a novel 4D parallelism strategy and a specialized hardware platform, ZionEX.

Core Contributions

The paper makes several key contributions:

  1. 4D Parallelism: The heart of the software solution, this strategy combines table-wise, row-wise, column-wise, and data parallelism to efficiently distribute the training of embedding operators across GPUs. This design addresses the limitations of existing deep learning frameworks that struggle with the scale and communication demands of DLRMs.
  2. Performance Optimizations: Neo includes optimizations such as hybrid kernel fusion, software-managed caching, and quality-preserving compression techniques to enhance embedding computations. These optimizations are crucial for reducing memory overhead and computational costs associated with massive embedding operators.
  3. ZionEX Hardware Platform: Co-designed with the Neo framework, ZionEX leverages a fully connected topology using RDMA over Converged Ethernet (RoCE) to improve inter-node communications necessary for DLRM training. This platform supports various high-performance communication patterns required by the 4D parallelism of Neo.
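To make the partitioning dimensions concrete, here is a minimal sketch of how sharding decisions like those in 4D parallelism might be made: very tall tables are split row-wise across workers, very wide tables column-wise, and the rest placed table-wise with greedy load balancing (data parallelism covers the dense layers and is not shown). The function name, thresholds, and shard-descriptor format are illustrative assumptions, not the paper's actual API.

```python
def shard_tables(tables, num_workers, row_threshold=1_000_000, col_threshold=512):
    """tables: list of (name, num_rows, embedding_dim).
    Returns shard descriptors (table, worker, row_range, col_range)."""
    shards = []
    load = [0] * num_workers  # rough per-worker cost in table cells

    for name, rows, dim in tables:
        if rows >= row_threshold:
            # Row-wise: split the hash (row) dimension evenly across workers.
            step = -(-rows // num_workers)  # ceiling division
            for w in range(num_workers):
                lo, hi = w * step, min((w + 1) * step, rows)
                if lo < hi:
                    shards.append((name, w, (lo, hi), (0, dim)))
                    load[w] += (hi - lo) * dim
        elif dim >= col_threshold:
            # Column-wise: split the embedding dimension across workers.
            step = -(-dim // num_workers)
            for w in range(num_workers):
                lo, hi = w * step, min((w + 1) * step, dim)
                if lo < hi:
                    shards.append((name, w, (0, rows), (lo, hi)))
                    load[w] += rows * (hi - lo)
        else:
            # Table-wise: whole table goes to the least-loaded worker.
            w = min(range(num_workers), key=load.__getitem__)
            shards.append((name, w, (0, rows), (0, dim)))
            load[w] += rows * dim
    return shards
```

A call like `shard_tables([("users", 5_000_000, 128), ("ads", 10_000, 1024), ("geo", 500, 64)], num_workers=4)` would split `users` row-wise into four shards, `ads` column-wise into four shards, and place `geo` whole on one worker, which is the kind of mixed placement the 4D strategy enables.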

Numerical Results

A prominent outcome of the research is the performance gain achieved with Neo and ZionEX: up to a 40x improvement in training throughput for DLRMs with 12 trillion parameters, using 128 GPUs across 16 ZionEX nodes, compared to previous systems. This acceleration is a significant advance in handling the computational demands of DLRMs at production scale.
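A back-of-the-envelope check shows why a model of this size forces the multi-level memory hierarchy (HBM+DDR+SSD) described in the abstract. Only the 12-trillion-parameter and 128-GPU figures come from the paper; the per-parameter byte count and per-GPU HBM capacity below are illustrative assumptions.

```python
# Illustrative sizing: model footprint vs. aggregate GPU memory.
params = 12e12                  # 12 trillion parameters (from the paper)
bytes_per_param = 4             # assume fp32 weights; optimizer state adds more
model_bytes = params * bytes_per_param          # 48 TB of weights alone

gpus = 128                      # 128 GPUs across 16 ZionEX nodes
hbm_per_gpu = 40e9              # assumed ~40 GB of HBM per GPU
total_hbm = gpus * hbm_per_gpu                  # ~5.1 TB aggregate HBM

print(f"model: {model_bytes/1e12:.0f} TB, aggregate HBM: {total_hbm/1e12:.1f} TB")
# The weights alone are roughly 10x larger than aggregate HBM, so hot
# embedding rows are cached in HBM while the long tail spills to DDR and SSD.
```

Under these assumptions the model cannot fit in GPU memory even across all 128 devices, motivating the software-managed caching and tiered-memory design.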

Practical and Theoretical Implications

Practically, the deployment of Neo and ZionEX can greatly enhance the efficiency of data centers by enabling the training of significantly larger models, facilitating more sophisticated recommendation systems, and potentially improving user experiences across platforms that rely on DLRMs. The research highlights the importance of co-designing hardware and software to meet the rising demands of AI workloads in commercial settings.

Theoretically, the introduction of 4D parallelism provides a new framework for optimizing the balance between computational load and communication overhead in distributed training systems. It sets a precedent for future research on parallelism strategies that can be adapted or expanded to various AI and machine learning models beyond recommendation systems, potentially influencing how AI infrastructure is developed and optimized.

Future Directions

Continued growth in DLRM size and complexity will drive further exploration of more efficient parallelization techniques and more responsive hardware designs. Neo, paired with ZionEX, could serve as a foundation on which additional optimizations are layered, accommodating newer, more complex models and evolving hardware. Promising directions include mixed-precision training, improved data management strategies, and more adaptive parallelization schemes, keeping AI systems efficient and scalable in increasingly data-intensive environments. Continued work on deterministic, reproducible training would also streamline deployment and debugging across large-scale AI operations.
