Context-aware Reranking with Utility Maximization for Recommendation (2110.09059v2)

Published 18 Oct 2021 in cs.IR

Abstract: As a critical task for large-scale commercial recommender systems, reranking has shown the potential of improving recommendation results by uncovering mutual influence among items. Reranking rearranges items in the initial ranking lists from the previous ranking stage to better meet users' demands. However, rather than considering the context of initial lists as most existing methods do, an ideal reranking algorithm should consider the counterfactual context -- the position and the alignment of the items in the reranked lists. In this work, we propose a novel pairwise reranking framework, Context-aware Reranking with Utility Maximization for recommendation (CRUM), which maximizes the overall utility after reranking efficiently. Specifically, we first design a utility-oriented evaluator, which applies Bi-LSTM and graph attention mechanism to estimate the listwise utility via the counterfactual context modeling. Then, under the guidance of the evaluator, we propose a pairwise reranker model to find the most suitable position for each item by swapping misplaced item pairs. Extensive experiments on two benchmark datasets and a proprietary real-world dataset demonstrate that CRUM significantly outperforms the state-of-the-art models in terms of both relevance-based metrics and utility-based metrics.
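To make the evaluator component of the abstract more concrete, below is a minimal, illustrative PyTorch sketch (not the authors' code) of a utility-oriented evaluator in the spirit of CRUM: item embeddings are encoded with a Bi-LSTM to capture positional context in the candidate (counterfactual) list, a single-head attention layer over a fully connected item graph stands in for the paper's graph attention mechanism, and a pointwise head is summed into a listwise utility estimate. All layer sizes, the fully connected graph, and the class name UtilityEvaluatorSketch are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class UtilityEvaluatorSketch(nn.Module):
    def __init__(self, item_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        # Bi-LSTM encodes each item together with its position in the list.
        self.bilstm = nn.LSTM(item_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        enc_dim = 2 * hidden_dim
        # Single-head attention over a fully connected item graph,
        # a simplified stand-in for the graph attention mechanism.
        self.attn_query = nn.Linear(enc_dim, enc_dim)
        self.attn_key = nn.Linear(enc_dim, enc_dim)
        self.attn_value = nn.Linear(enc_dim, enc_dim)
        # Pointwise utility head; list utility is the sum over positions.
        self.utility_head = nn.Linear(enc_dim, 1)

    def forward(self, items: torch.Tensor) -> torch.Tensor:
        # items: (batch, list_len, item_dim), ordered as the candidate
        # (counterfactual) ranking whose utility is being evaluated.
        ctx, _ = self.bilstm(items)                      # (B, L, 2H)
        q = self.attn_query(ctx)
        k = self.attn_key(ctx)
        v = self.attn_value(ctx)
        scores = q @ k.transpose(1, 2) / (q.size(-1) ** 0.5)
        ctx = ctx + F.softmax(scores, dim=-1) @ v        # cross-item influence
        per_item_utility = self.utility_head(ctx).squeeze(-1)  # (B, L)
        return per_item_utility.sum(dim=-1)              # listwise utility (B,)


if __name__ == "__main__":
    evaluator = UtilityEvaluatorSketch()
    lists = torch.randn(4, 10, 32)    # 4 candidate lists of 10 items each
    print(evaluator(lists).shape)     # torch.Size([4])

In the full framework, such an evaluator would score reranked lists and guide a pairwise reranker that swaps misplaced item pairs; that training loop is omitted here.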

Authors (8)
  1. Yunjia Xi (21 papers)
  2. Weiwen Liu (59 papers)
  3. Xinyi Dai (32 papers)
  4. Ruiming Tang (171 papers)
  5. Weinan Zhang (322 papers)
  6. Qing Liu (196 papers)
  7. Xiuqiang He (97 papers)
  8. Yong Yu (219 papers)
Citations (9)
