Adaptive Fair Representation Learning for Personalized Fairness in Recommendations via Information Alignment (2404.07494v2)
Abstract: Personalized fairness in recommendations has attracted increasing attention from researchers. Existing works often treat a fairness requirement, represented as a collection of sensitive attributes, as a hyper-parameter, and pursue extreme fairness by completely removing sensitive-attribute information from the learned fair embedding. This approach suffers from two challenges: the huge training cost incurred by the explosion of attribute combinations, and a suboptimal trade-off between fairness and accuracy. In this paper, we propose a novel Adaptive Fair Representation Learning (AFRL) model, which achieves truly personalized fairness by training only one model that adaptively serves different fairness requirements during the inference phase. In particular, AFRL treats fairness requirements as inputs and learns an attribute-specific embedding for each attribute from the unfair user embedding, which gives AFRL the adaptability at inference time to determine the non-sensitive attributes under the guidance of each user's unique fairness requirement. To achieve a better trade-off between fairness and accuracy in recommendations, AFRL conducts a novel Information Alignment that exactly preserves the discriminative information of non-sensitive attributes and incorporates a debiased collaborative embedding into the fair embedding to capture attribute-independent collaborative signals, without loss of fairness. Finally, extensive experiments on real datasets, together with sound theoretical analysis, demonstrate the superiority of AFRL.
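The inference-time adaptability described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function and variable names are hypothetical, and summation is assumed as one simple way to aggregate the attribute-specific embeddings with the debiased collaborative embedding. The point is only that a single set of trained embeddings can serve any fairness requirement by excluding the embeddings of the attributes a user marks as sensitive.

```python
import numpy as np

def compose_fair_embedding(attr_embeddings, collab_embedding, sensitive):
    """Compose a fair user embedding at inference time (illustrative sketch).

    attr_embeddings: dict mapping attribute name -> embedding vector, each
        learned once from the (unfair) user embedding during training.
    collab_embedding: attribute-independent (debiased) collaborative embedding.
    sensitive: set of attribute names in this user's fairness requirement;
        their embeddings are excluded, so the composed embedding carries no
        information about them, while non-sensitive attribute information
        and the collaborative signal are preserved.
    """
    kept = [emb for name, emb in attr_embeddings.items()
            if name not in sensitive]
    # Sum-aggregation is an assumption for illustration; any aggregator
    # that combines the kept embeddings with the collaborative one works.
    return collab_embedding + sum(kept, np.zeros_like(collab_embedding))

# One trained model, two different fairness requirements at inference:
attr_embeddings = {"gender": np.array([1.0, 0.0]),
                   "age":    np.array([0.0, 1.0])}
collab = np.array([0.5, 0.5])
u1 = compose_fair_embedding(attr_embeddings, collab, {"gender"})
u2 = compose_fair_embedding(attr_embeddings, collab, {"gender", "age"})
```

Here `u1` keeps the age information for a user who only considers gender sensitive, while `u2` falls back to the purely collaborative embedding for a user who masks both attributes, without retraining anything.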
- Lilin Zhang
- Ning Yang
- Xinyu Zhu