
A Frequency-aware Software Cache for Large Recommendation System Embeddings (2208.05321v1)

Published 8 Aug 2022 in cs.IR, cs.AI, cs.DC, and cs.LG

Abstract: Deep learning recommendation models (DLRMs) have been widely applied in Internet companies. The embedding tables of DLRMs are too large to fit entirely in GPU memory. We propose a GPU-based software cache approach to dynamically manage the embedding table across CPU and GPU memory by leveraging the id frequency statistics of the target dataset. Our proposed software cache efficiently trains entire DLRMs on GPU in a synchronized update manner. It also scales to multiple GPUs in combination with widely used hybrid parallel training approaches. Evaluating our prototype system shows that we can keep only 1.5% of the embedding parameters on the GPU and still obtain a decent end-to-end training speed.
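The core idea in the abstract — keeping only the hottest embedding rows in limited GPU memory and spilling the rest to CPU memory based on id frequency — can be illustrated with a minimal sketch. This is not the paper's implementation; the class name, eviction policy (least-frequently-used), and toy data below are illustrative assumptions:

```python
# Minimal sketch (assumed design, not the paper's code): a frequency-aware
# software cache that keeps only the hottest embedding rows in a small
# "GPU" buffer and evicts the least-frequently-accessed row when full.
from collections import defaultdict

class FreqAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity      # number of rows that fit in GPU memory
        self.gpu = {}                 # id -> embedding row currently cached
        self.freq = defaultdict(int)  # id -> access count (frequency stats)

    def lookup(self, idx, cpu_table):
        """Return the embedding row for `idx`, fetching it from the CPU
        table and evicting the coldest cached row if the cache is full."""
        self.freq[idx] += 1
        if idx in self.gpu:
            return self.gpu[idx]      # cache hit: served from GPU memory
        if len(self.gpu) >= self.capacity:
            coldest = min(self.gpu, key=lambda i: self.freq[i])
            del self.gpu[coldest]     # evict the least-frequent row to CPU
        self.gpu[idx] = cpu_table[idx]  # admit the requested row
        return self.gpu[idx]

# Toy CPU-resident embedding table and a skewed access stream:
cpu_table = {i: [float(i)] * 4 for i in range(100)}
cache = FreqAwareCache(capacity=2)
for i in [1, 1, 1, 2, 3]:  # id 1 is hot; cold id 2 is evicted for id 3
    cache.lookup(i, cpu_table)
print(sorted(cache.gpu))   # hot ids retained in the small "GPU" buffer
```

With a skewed id distribution, as is typical in recommendation workloads, such a policy keeps the frequently accessed rows resident, which is why a small GPU-side fraction of parameters (1.5% in the paper's evaluation) can suffice.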

Authors (8)
  1. Jiarui Fang (16 papers)
  2. Geng Zhang (3 papers)
  3. Jiatong Han (5 papers)
  4. Shenggui Li (13 papers)
  5. Zhengda Bian (5 papers)
  6. Yongbin Li (128 papers)
  7. Jin Liu (151 papers)
  8. Yang You (173 papers)
Citations (2)
