
UP5: Unbiased Foundation Model for Fairness-aware Recommendation (2305.12090v2)

Published 20 May 2023 in cs.IR, cs.AI, cs.CL, and cs.LG

Abstract: Recent advances in Foundation Models such as LLMs have propelled them to the forefront of Recommender Systems (RS). Despite their utility, there is a growing concern that LLMs might inadvertently perpetuate societal stereotypes, resulting in unfair recommendations. Since fairness is critical for RS as many users take it for decision-making and demand fulfillment, this paper focuses on user-side fairness for LLM-based recommendation, where users may require a recommender system to be fair on specific sensitive features such as gender or age. In this paper, we dive into the extent of unfairness exhibited by LLM-based recommender models based on both T5 and LLaMA backbones, and discuss appropriate methods for promoting equitable treatment of users in LLM-based recommendation models. We introduce a novel Counterfactually-Fair-Prompt (CFP) method towards Unbiased Foundation mOdels (UFO) for fairness-aware LLM-based recommendation. Experiments are conducted on two real-world datasets, MovieLens-1M and Insurance, and compared with both matching-based and sequential-based fairness-aware recommendation models. Results show that CFP achieves better recommendation performance with a high level of fairness. Data and code are open-sourced at https://github.com/agiresearch/UP5.

Authors (5)
  1. Wenyue Hua (51 papers)
  2. Yingqiang Ge (36 papers)
  3. Shuyuan Xu (31 papers)
  4. Jianchao Ji (14 papers)
  5. Yongfeng Zhang (163 papers)
Citations (40)
