
MicroRec: Efficient Recommendation Inference by Hardware and Data Structure Solutions (2010.05894v2)

Published 12 Oct 2020 in cs.AR, cs.AI, cs.IR, and cs.LG

Abstract: Deep neural networks are widely used in personalized recommendation systems. Unlike regular DNN inference workloads, recommendation inference is memory-bound due to the many random memory accesses needed to look up the embedding tables. The inference is also heavily constrained in terms of latency because producing a recommendation for a user must be done within tens of milliseconds. In this paper, we propose MicroRec, a high-performance inference engine for recommendation systems. MicroRec accelerates recommendation inference by (1) redesigning the data structures involved in the embeddings to reduce the number of lookups needed and (2) taking advantage of the availability of High-Bandwidth Memory (HBM) in FPGA accelerators to tackle the latency by enabling parallel lookups. We have implemented the resulting design on an FPGA board, including the embedding lookup step as well as the complete inference process. Compared to the optimized CPU baseline (16 vCPU, AVX2-enabled), MicroRec achieves 13.8~14.7x speedup on embedding lookup alone and 2.5~5.4x speedup for the entire recommendation inference in terms of throughput. As for latency, CPU-based engines need milliseconds to infer a recommendation while MicroRec only takes microseconds, a significant advantage in real-time recommendation systems.
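The data-structure idea in (1) can be illustrated with a minimal Python sketch. The abstract only states that lookups are reduced by redesigning the embedding data structures; the specific mechanism below (precomputing the Cartesian product of two small tables so a pair of IDs needs a single lookup) is one such redesign, shown here for illustration, and the table names and sizes are invented for the example.

```python
import itertools

# Two small embedding tables (illustrative sizes: vocab 3 and 4, dim 2).
# In the baseline, each sparse feature costs one random memory access.
table_a = {i: [float(i), float(i)] for i in range(3)}
table_b = {j: [10.0 + j, 10.0 + j] for j in range(4)}

def baseline_lookup(a_id, b_id):
    # Two random memory accesses, one per table.
    return table_a[a_id] + table_b[b_id]

# Redesigned data structure: precompute the Cartesian product of the
# two small tables, so the (a_id, b_id) pair costs a single lookup.
# Memory grows multiplicatively, so this only pays off for small tables.
combined = {
    (a, b): table_a[a] + table_b[b]
    for a, b in itertools.product(table_a, table_b)
}

def combined_lookup(a_id, b_id):
    # One random memory access instead of two.
    return combined[(a_id, b_id)]

assert baseline_lookup(1, 2) == combined_lookup(1, 2)
```

The trade-off is extra memory for fewer random accesses, which matters because the workload is memory-bound rather than compute-bound.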

Authors (12)
  1. Wenqi Jiang (15 papers)
  2. Zhenhao He (4 papers)
  3. Shuai Zhang (319 papers)
  4. Thomas B. Preußer (11 papers)
  5. Kai Zeng (47 papers)
  6. Liang Feng (59 papers)
  7. Jiansong Zhang (16 papers)
  8. Tongxuan Liu (12 papers)
  9. Yong Li (628 papers)
  10. Jingren Zhou (198 papers)
  11. Ce Zhang (215 papers)
  12. Gustavo Alonso (45 papers)
Citations (7)
