Learning Multi-granular Quantized Embeddings for Large-Vocab Categorical Features in Recommender Systems (2002.08530v2)

Published 20 Feb 2020 in cs.IR

Abstract: Recommender system models often represent various sparse features like users, items, and categorical features via embeddings. A standard approach is to map each unique feature value to an embedding vector. The size of the produced embedding table grows linearly with the size of the vocabulary. Therefore, a large vocabulary inevitably leads to a gigantic embedding table, creating two severe problems: (i) making model serving intractable in resource-constrained environments; (ii) causing overfitting problems. In this paper, we seek to learn highly compact embeddings for large-vocab sparse features in recommender systems (recsys). First, we show that the novel Differentiable Product Quantization (DPQ) approach can generalize to recsys problems. In addition, to better handle the power-law data distribution commonly seen in recsys, we propose a Multi-Granular Quantized Embeddings (MGQE) technique which learns more compact embeddings for infrequent items. We seek to provide a new angle to improve recommendation performance with compact model sizes. Extensive experiments on three recommendation tasks and two datasets show that we can achieve on par or better performance, with only ~20% of the original model size.
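As a rough illustration (not taken from the paper), the sketch below shows how a product-quantized embedding table, of the kind DPQ and MGQE build on, replaces a full V x d float table with per-item integer codes plus small per-subspace codebooks. All names and sizes here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of product-quantized embedding lookup (illustrative only).
# A full table needs V*d floats; here we store V*D one-byte codes plus
# D codebooks of K centroids each. MGQE would further shrink the code/codebook
# budget for infrequent items to match the power-law item distribution.
import numpy as np

V, d = 1_000_000, 64      # vocabulary size, embedding dimension (assumed)
D, K = 8, 256             # subspaces, centroids per subspace (assumed)

rng = np.random.default_rng(0)
codes = rng.integers(0, K, size=(V, D), dtype=np.uint8)            # per-item codes
centroids = rng.normal(size=(D, K, d // D)).astype(np.float32)     # D codebooks

def embed(item_ids: np.ndarray) -> np.ndarray:
    """Reconstruct embeddings by concatenating one centroid per subspace."""
    item_codes = codes[item_ids]                                    # (B, D)
    parts = [centroids[j, item_codes[:, j]] for j in range(D)]      # D x (B, d/D)
    return np.concatenate(parts, axis=1)                            # (B, d)

vecs = embed(np.array([3, 42, 99_999]))
print(vecs.shape)                                                   # (3, 64)

# Storage comparison (float32 = 4 bytes):
print(V * d * 4)                      # full table: ~256 MB
print(codes.nbytes + centroids.nbytes)  # quantized: ~8 MB in this configuration
```

In this configuration the quantized representation is a few percent of the full table's size; the paper's reported ~20% figure reflects its specific models, granularity choices, and end-to-end training of the codes and codebooks.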

Authors (7)
  1. Wang-Cheng Kang (16 papers)
  2. Derek Zhiyuan Cheng (12 papers)
  3. Ting Chen (148 papers)
  4. Xinyang Yi (24 papers)
  5. Dong Lin (15 papers)
  6. Lichan Hong (35 papers)
  7. Ed H. Chi (74 papers)
Citations (48)
