Training Recommender Systems at Scale: Communication-Efficient Model and Data Parallelism (2010.08899v2)

Published 18 Oct 2020 in cs.LG, cs.DC, and stat.ML

Abstract: In this paper, we consider hybrid parallelism -- a paradigm that employs both Data Parallelism (DP) and Model Parallelism (MP) -- to scale distributed training of large recommendation models. We propose a compression framework called Dynamic Communication Thresholding (DCT) for communication-efficient hybrid training. DCT filters the entities to be communicated across the network through a simple hard-thresholding function, allowing only the most relevant information to pass through. For communication-efficient DP, DCT compresses the parameter gradients sent to the parameter server during model synchronization. The threshold is updated only once every few thousand iterations to reduce the computational overhead of compression. For communication-efficient MP, DCT incorporates a novel technique to compress the activations and gradients sent across the network during the forward and backward propagation, respectively. This is done by identifying and updating only the most relevant neurons of the neural network for each training sample in the data. We evaluate DCT on publicly available natural language processing and recommender models and datasets, as well as recommendation systems used in production at Facebook. DCT reduces communication by at least $100\times$ and $20\times$ during DP and MP, respectively. The algorithm has been deployed in production, and it improves end-to-end training time for a state-of-the-art industrial recommender model by 37\%, without any loss in performance.
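
The hard-thresholding idea at the core of DCT's data-parallel path can be illustrated with a short sketch: each worker sends only the largest-magnitude gradient entries to the parameter server, and the threshold itself is recomputed only once every few thousand iterations so that choosing it adds little overhead. The sketch below is a minimal illustration under those assumptions; the names ThresholdCompressor, keep_ratio, and refresh_interval are illustrative and are not taken from the paper's implementation.

import numpy as np

class ThresholdCompressor:
    """Hard-thresholding gradient compressor in the spirit of DCT (sketch).

    Keeps only entries whose magnitude is at least a threshold; the threshold
    is refreshed from the current gradient only every `refresh_interval`
    steps, so the cost of selecting it is amortized over many iterations.
    """

    def __init__(self, keep_ratio=0.01, refresh_interval=1000):
        self.keep_ratio = keep_ratio              # fraction of entries to keep
        self.refresh_interval = refresh_interval  # how often to refresh the threshold
        self.threshold = 0.0
        self.step = 0

    def compress(self, grad):
        g = np.asarray(grad)
        flat = g.ravel()
        # Recompute the threshold only occasionally (e.g. every few thousand steps).
        if self.step % self.refresh_interval == 0:
            k = max(1, int(self.keep_ratio * flat.size))
            self.threshold = np.partition(np.abs(flat), -k)[-k]
        self.step += 1
        # Hard-threshold: transmit only the indices and values that survive.
        idx = np.nonzero(np.abs(flat) >= self.threshold)[0]
        return idx, flat[idx], g.shape

    @staticmethod
    def decompress(idx, values, shape):
        # Rebuild a dense (mostly zero) gradient from the sparse message.
        out = np.zeros(int(np.prod(shape)), dtype=values.dtype)
        out[idx] = values
        return out.reshape(shape)

In a data-parallel setup, each worker would call compress on its local gradient and send only the (indices, values) pair over the network; the parameter server reconstructs a sparse update with decompress. The paper's model-parallel path applies an analogous thresholding to the activations and activation gradients exchanged during forward and backward propagation.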

Authors (9)
  1. Vipul Gupta (31 papers)
  2. Dhruv Choudhary (16 papers)
  3. Ping Tak Peter Tang (16 papers)
  4. Xiaohan Wei (37 papers)
  5. Xing Wang (191 papers)
  6. Yuzhen Huang (15 papers)
  7. Arun Kejariwal (12 papers)
  8. Kannan Ramchandran (129 papers)
  9. Michael W. Mahoney (233 papers)
Citations (29)
