LTP-MMF: Towards Long-term Provider Max-min Fairness Under Recommendation Feedback Loops (2308.05902v2)

Published 11 Aug 2023 in cs.IR

Abstract: Multi-stakeholder recommender systems involve various roles, such as users and providers. Previous work pointed out that max-min fairness (MMF) is a better metric for supporting weak providers. However, the features and parameters of these roles vary over time, so ensuring long-term provider MMF has become a significant challenge. We observed that recommendation feedback loops (RFL) greatly influence provider MMF in the long term. RFL means that recommender systems can only receive feedback on exposed items from users and update recommender models incrementally based on this feedback. When utilizing the feedback, the recommender model will regard unexposed items as negative. In this way, a tail provider will not get the opportunity to be exposed, and its items will always be treated as negative samples. This phenomenon becomes increasingly serious under RFL. To alleviate the problem, this paper proposes an online ranking model named Long-Term Provider Max-min Fairness (LTP-MMF). Theoretical analysis shows that the long-term regret of LTP-MMF enjoys a sub-linear bound. Experimental results on three public recommendation benchmarks demonstrate that LTP-MMF outperforms the baselines in the long term.
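
To make the relevance-vs-fairness trade-off described in the abstract concrete, below is a minimal, hypothetical Python sketch of provider max-min-fairness-aware re-ranking. It is not the paper's LTP-MMF algorithm; the function name, the additive bonus scheme, and the `lam` parameter are illustrative assumptions. It only shows the core max-min idea: items from the provider with the least accumulated exposure receive a boost, so exposure (which feeds the feedback loop) is not monopolized by head providers.

```python
# Minimal sketch of provider max-min-fairness-aware re-ranking.
# Illustrates the general idea only, NOT the paper's LTP-MMF algorithm;
# the bonus scheme and all names here are hypothetical.

def rerank_with_provider_mmf(scores, provider_of, exposure, k, lam=0.15):
    """Greedily pick k items, granting a bonus to items from the
    provider with the least accumulated exposure (the max-min idea)."""
    exposure = dict(exposure)            # copy: don't mutate caller state
    remaining = set(range(len(scores)))
    ranking = []
    for _ in range(k):
        worst = min(exposure.values())   # exposure of worst-off provider
        best = max(
            remaining,
            key=lambda i: scores[i]
            + (lam if exposure[provider_of[i]] == worst else 0.0),
        )
        ranking.append(best)
        remaining.remove(best)
        exposure[provider_of[best]] += 1  # exposure feeds back into the loop
    return ranking, exposure

# Toy usage: provider 1 is a "tail" provider with zero past exposure,
# so its items get a boost despite lower relevance scores.
scores = [0.9, 0.8, 0.7, 0.6]
provider_of = [0, 0, 1, 1]
ranking, new_exposure = rerank_with_provider_mmf(
    scores, provider_of, exposure={0: 10, 1: 0}, k=2
)
print(ranking)  # [0, 2]: the most relevant item plus a boosted tail item
```

Per the abstract, the actual LTP-MMF method additionally handles the online setting, where the model is updated incrementally from feedback on exposed items, and is shown to achieve a sub-linear long-term regret bound; this sketch captures only the per-request fairness-aware re-ranking step.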

Authors (6)
  1. Chen Xu (186 papers)
  2. Xiaopeng Ye (6 papers)
  3. Jun Xu (398 papers)
  4. Xiao Zhang (435 papers)
  5. Weiran Shen (24 papers)
  6. Ji-Rong Wen (299 papers)
Citations (2)
