
Personalized Re-ranking for Recommendation (1904.06813v3)

Published 15 Apr 2019 in cs.IR and cs.AI

Abstract: Ranking is a core task in recommender systems, which aims at providing an ordered list of items to users. Typically, a ranking function is learned from a labeled dataset to optimize global performance, producing a ranking score for each individual item. However, this may be sub-optimal because the scoring function applies to each item individually and does not explicitly consider the mutual influence between items, nor the differences in users' preferences or intents. Therefore, we propose a personalized re-ranking model for recommender systems. The proposed re-ranking model can be easily deployed as a follow-up module after any ranking algorithm, by directly using the existing ranking feature vectors. It directly optimizes the whole recommendation list by employing a Transformer structure to efficiently encode the information of all items in the list. Specifically, the Transformer applies a self-attention mechanism that directly models the global relationships between any pair of items in the whole list. We confirm that the performance can be further improved by introducing pre-trained embeddings to learn personalized encoding functions for different users. Experimental results on both offline benchmarks and a real-world online e-commerce system demonstrate significant improvements from the proposed re-ranking model.
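The abstract outlines a list-wise re-ranking architecture: the feature vector of every item in the initially ranked list, concatenated with a pre-trained personalized user embedding, is fed through a Transformer encoder so that self-attention can model the mutual influence between any pair of items, after which a scoring head produces new scores for the whole list. The sketch below is a minimal PyTorch illustration under those assumptions; the class name `PersonalizedReRanker`, the layer sizes, and the linear scoring head are hypothetical choices for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a Transformer-based list-wise re-ranker.
# All dimensions, names, and the scoring head are illustrative assumptions.
import torch
import torch.nn as nn


class PersonalizedReRanker(nn.Module):
    """Re-scores a whole candidate list with self-attention over all items."""

    def __init__(self, feat_dim=64, user_dim=32, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Project (upstream ranking features || personalized user embedding)
        # into the Transformer hidden size.
        self.input_proj = nn.Linear(feat_dim + user_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # Self-attention over the whole list models pairwise item influence.
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.score_head = nn.Linear(d_model, 1)

    def forward(self, item_feats, user_emb):
        # item_feats: (batch, list_len, feat_dim) feature vectors from the
        #             upstream ranker for each item in the initial list.
        # user_emb:   (batch, user_dim) pre-trained personalized embedding.
        list_len = item_feats.size(1)
        user = user_emb.unsqueeze(1).expand(-1, list_len, -1)
        x = self.input_proj(torch.cat([item_feats, user], dim=-1))
        x = self.encoder(x)                    # global item-item attention
        return self.score_head(x).squeeze(-1)  # (batch, list_len) new scores


# Usage: re-rank an initial list of 10 candidates for a batch of 2 users.
model = PersonalizedReRanker()
scores = model(torch.randn(2, 10, 64), torch.randn(2, 32))
reordered = scores.argsort(dim=-1, descending=True)  # indices in new order
```

A fuller implementation would likely also encode each item's position in the initial ranking and train with a list-wise objective; both are omitted here to keep the sketch short.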

Authors (9)
  1. Changhua Pei (19 papers)
  2. Yi Zhang (994 papers)
  3. Yongfeng Zhang (163 papers)
  4. Fei Sun (151 papers)
  5. Xiao Lin (181 papers)
  6. Hanxiao Sun (6 papers)
  7. Jian Wu (314 papers)
  8. Peng Jiang (274 papers)
  9. Wenwu Ou (37 papers)
Citations (5)
