
Top-Personalized-K Recommendation (2402.16304v1)

Published 26 Feb 2024 in cs.IR

Abstract: The conventional top-K recommendation, which presents the top-K items with the highest ranking scores, is a common practice for generating personalized ranking lists. However, is this fixed-size top-K recommendation the optimal approach for every user's satisfaction? Not necessarily. We point out that providing fixed-size recommendations without taking user utility into account can be suboptimal, as it may unavoidably include irrelevant items or limit the exposure of relevant ones. To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list that maximizes individual user satisfaction. As a solution to the proposed task, we develop a model-agnostic framework named PerK. PerK estimates the expected user utility by leveraging calibrated interaction probabilities, then selects the recommendation size that maximizes this expected utility. Through extensive experiments on real-world datasets, we demonstrate the superiority of PerK on the Top-Personalized-K recommendation task. We expect that Top-Personalized-K recommendation can offer enhanced solutions for various real-world recommendation scenarios, given its strong compatibility with existing models.
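The core idea in the abstract (estimate expected utility from calibrated interaction probabilities, then pick the list size that maximizes it) can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: the utility function here (expected number of relevant items minus a per-item inspection cost) and the function name `personalized_k` are assumptions made for the example.

```python
# Illustrative sketch of personalized-K selection. Given calibrated
# probabilities that the user will interact with each ranked item, we
# scan prefix sizes K = 1..N and keep the K with the highest expected
# utility. The utility used here -- expected relevant-item count minus a
# per-item cost -- is a stand-in for the paper's actual objective.

def personalized_k(calibrated_probs, item_cost=0.5):
    """Return (best_k, best_utility) over prefix sizes 1..len(probs)."""
    best_k, best_utility = 0, 0.0  # K = 0 means recommending nothing
    expected_hits = 0.0
    for k, p in enumerate(calibrated_probs, start=1):
        expected_hits += p  # expected count of relevant items in top-k
        utility = expected_hits - item_cost * k
        if utility > best_utility:
            best_k, best_utility = k, utility
    return best_k, best_utility

# A user with confident predictions warrants a longer list than one
# with uncertain predictions, so the chosen K differs per user.
confident = [0.9, 0.8, 0.7, 0.2, 0.1]
uncertain = [0.6, 0.3, 0.2, 0.1, 0.05]
```

Note that the selected size only responds correctly to uncertainty because the probabilities are calibrated; raw ranking scores would make the expected-utility estimate meaningless, which is why calibration is central to the framework.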
