Item-side Fairness of Large Language Model-based Recommendation System (2402.15215v1)

Published 23 Feb 2024 in cs.IR

Abstract: Recommendation systems for Web content distribution are intricately connected to the information access and exposure opportunities of vulnerable populations. The emergence of LLM-based Recommendation Systems (LRS) may introduce additional societal challenges due to the inherent biases in LLMs. Yet there remains a lack of comprehensive investigation into the item-side fairness of LRS, given their unique characteristics compared to conventional recommendation systems. To bridge this gap, this study examines the item-side fairness properties of LRS and reveals the influence of both historical user interactions and the inherent semantic biases of LLMs, shedding light on the need to extend conventional item-side fairness methods to LRS. Towards this goal, we develop a concise and effective framework called IFairLRS to enhance the item-side fairness of an LRS. IFairLRS covers the main stages of building an LRS, with specifically adapted strategies to calibrate its recommendations. We utilize IFairLRS to fine-tune LLaMA, a representative LLM, on the MovieLens and Steam datasets, and observe significant item-side fairness improvements. The code can be found at https://github.com/JiangM-C/IFairLRS.git.
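The abstract speaks of calibrating the recommendations of an LRS toward item-side fairness. As a minimal sketch of what such a measurement could look like, not the paper's actual formulation, the Python below compares the exposure each item group (e.g., movie genre) receives in recommendation lists against a target distribution; the function names and the choice of mean absolute deviation as the gap metric are illustrative assumptions.

```python
# Illustrative item-side fairness measurement for a recommender.
# It compares the exposure each item group receives in the recommendation
# lists against a target distribution (e.g., the group distribution in the
# catalog or in historical interactions). This is a hypothetical sketch,
# not the metric or calibration strategy defined by IFairLRS.
from collections import Counter
from typing import Dict, List


def group_exposure(rec_lists: List[List[str]],
                   item_to_group: Dict[str, str]) -> Dict[str, float]:
    """Fraction of all recommendation slots occupied by each item group."""
    counts = Counter(item_to_group[item]
                     for recs in rec_lists for item in recs)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


def fairness_gap(exposure: Dict[str, float],
                 target: Dict[str, float]) -> float:
    """Mean absolute deviation between achieved and target group exposure.

    Zero means the recommendations are perfectly calibrated to the target;
    larger values indicate that some groups are over- or under-exposed.
    """
    groups = set(exposure) | set(target)
    return sum(abs(exposure.get(g, 0.0) - target.get(g, 0.0))
               for g in groups) / len(groups)


# Example: two users, four movies in three genres; the target is the
# (assumed) genre share in the catalog.
recs = [["m1", "m2", "m3"], ["m1", "m4", "m2"]]
genre = {"m1": "action", "m2": "drama", "m3": "comedy", "m4": "action"}
catalog_share = {"action": 0.4, "drama": 0.3, "comedy": 0.3}

print(fairness_gap(group_exposure(recs, genre), catalog_share))
```

A calibration step in this spirit would then re-rank or re-generate candidate items so that the measured gap shrinks without sacrificing too much recommendation accuracy.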

Authors (7)
  1. Meng Jiang (126 papers)
  2. Keqin Bao (21 papers)
  3. Jizhi Zhang (24 papers)
  4. Wenjie Wang (150 papers)
  5. Zhengyi Yang (24 papers)
  6. Fuli Feng (143 papers)
  7. Xiangnan He (200 papers)
Citations (12)
