
Ensuring User-side Fairness in Dynamic Recommender Systems (2308.15651v2)

Published 29 Aug 2023 in cs.IR, cs.CY, and cs.LG

Abstract: User-side group fairness is crucial for modern recommender systems, aiming to alleviate performance disparities among user groups defined by sensitive attributes such as gender, race, or age. In the ever-evolving landscape of user-item interactions, continual adaptation to newly collected data is crucial for recommender systems to stay aligned with the latest user preferences. However, we observe that such continual adaptation often exacerbates performance disparities. This necessitates a thorough investigation into user-side fairness in dynamic recommender systems, an area that has been unexplored in the literature. This problem is challenging due to distribution shifts, frequent model updates, and the non-differentiability of ranking metrics. To our knowledge, this paper presents the first principled study on ensuring user-side fairness in dynamic recommender systems. We start with theoretical analyses of fine-tuning vs. retraining, showing that the best practice is incremental fine-tuning with restart. Guided by our theoretical analyses, we propose FAir Dynamic rEcommender (FADE), an end-to-end fine-tuning framework to dynamically ensure user-side fairness over time. To overcome the non-differentiability of recommendation metrics in the fairness loss, we introduce Differentiable Hit (DH) as an improvement over the recent NeuralNDCG method, not only alleviating its gradient vanishing issue but also achieving higher efficiency. We further address the instability of the fairness loss by leveraging the competing nature of the recommendation loss and the fairness loss. Through extensive experiments on real-world datasets, we demonstrate that FADE effectively and efficiently reduces performance disparities with little sacrifice in overall recommendation performance.
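The abstract does not spell out how Differentiable Hit is defined, so the paper's exact formulation is not reproduced here. As a rough illustration of the general idea of making a hit-based ranking metric differentiable, the sketch below smooths the hard rank comparison with sigmoids and then measures a group-level gap, in the spirit of a user-side fairness loss. All names (`soft_hit_at_k`, `group_gap_loss`, the temperature `tau`) are our own hypothetical choices, not the paper's DH method.

```python
import torch

def soft_hit_at_k(scores: torch.Tensor, pos_idx: int, k: int,
                  tau: float = 1.0) -> torch.Tensor:
    """Smooth surrogate for one user's Hit@k indicator.

    scores : (num_items,) predicted scores over candidate items
    pos_idx: index of the ground-truth (positive) item
    k      : ranking cutoff
    tau    : temperature; smaller values approach the hard 0/1 metric
    """
    pos_score = scores[pos_idx]
    # Soft rank: count items scoring above the positive item, replacing
    # each hard comparison 1[s_j > s_pos] with a sigmoid.
    soft_rank = torch.sigmoid((scores - pos_score) / tau).sum() - 0.5
    # (the -0.5 removes the self-comparison, since sigmoid(0) = 0.5)
    # Soft indicator of "ranked within the top k".
    return torch.sigmoid((k - soft_rank) / tau)

def group_gap_loss(hits_a: torch.Tensor, hits_b: torch.Tensor) -> torch.Tensor:
    """Illustrative fairness loss: absolute gap in mean soft hit rate
    between two user groups defined by a sensitive attribute."""
    return (hits_a.mean() - hits_b.mean()).abs()

# Usage: gradients flow through the surrogate, unlike the hard Hit@k.
scores = torch.randn(100, requires_grad=True)
hit = soft_hit_at_k(scores, pos_idx=7, k=10)
hit.backward()
```

One design tension such surrogates share: a small `tau` tracks the true metric closely but yields near-zero gradients away from the decision boundary (the gradient vanishing issue the abstract attributes to NeuralNDCG), while a large `tau` gives smoother gradients at the cost of a looser approximation.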

Authors (10)
  1. Hyunsik Yoo (4 papers)
  2. Zhichen Zeng (24 papers)
  3. Jian Kang (142 papers)
  4. Zhining Liu (32 papers)
  5. David Zhou (6 papers)
  6. Fei Wang (574 papers)
  7. Eunice Chan (3 papers)
  8. Hanghang Tong (137 papers)
  9. Ruizhong Qiu (22 papers)
  10. Charlie Xu (3 papers)
Citations (3)
