Selective Fairness in Recommendation via Prompts (2205.04682v2)

Published 10 May 2022 in cs.IR

Abstract: Recommendation fairness has recently attracted great attention. In real-world systems, users usually have multiple sensitive attributes (e.g., age, gender, and occupation), and they may not want their recommendation results to be influenced by those attributes. Moreover, which of these attributes should be considered in fairness-aware modeling, and when, should depend on users' specific demands. In this work, we define the selective fairness task, where users can flexibly choose which sensitive attributes the recommendation model should be free of bias toward. We propose a novel parameter-efficient prompt-based fairness-aware recommendation (PFRec) framework, which relies on attribute-specific prompt-based bias eliminators with adversarial training, enabling selective fairness for different attribute combinations in sequential recommendation. Both task-specific and user-specific prompts are considered. We conduct extensive evaluations to verify PFRec's superiority in selective fairness. The source code is released at https://github.com/wyqing20/PFRec.
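
The abstract describes attribute-specific prompts combined with adversarial training on top of a sequential recommender. The sketch below is a minimal illustration of that general idea, not the released PFRec code: a learnable prompt is prepended to the item sequence before a Transformer encoder, and a discriminator behind a gradient-reversal layer tries to recover the sensitive attribute from the resulting user representation. All class and parameter names here (PromptedSeqRec, GradReverse, prompt_len, etc.) are hypothetical; refer to the authors' repository for the actual implementation.

```python
# Minimal sketch (assumed structure, not the authors' code) of a prompt-based
# bias eliminator with adversarial training for sequential recommendation.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class PromptedSeqRec(nn.Module):
    def __init__(self, n_items, n_attr_values, d=64, prompt_len=4):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)
        # One learnable prompt for the sensitive attribute the user wants neutralized;
        # PFRec would select/combine such prompts per attribute combination.
        self.attr_prompt = nn.Parameter(torch.randn(prompt_len, d) * 0.02)
        enc_layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Adversary: predicts the sensitive attribute from the user representation.
        self.discriminator = nn.Linear(d, n_attr_values)

    def forward(self, item_seq):                       # item_seq: (B, L) item ids
        x = self.item_emb(item_seq)                    # (B, L, d)
        prompt = self.attr_prompt.expand(x.size(0), -1, -1)
        h = self.encoder(torch.cat([prompt, x], dim=1))
        user_repr = h[:, -1]                           # last position as user state
        attr_logits = self.discriminator(GradReverse.apply(user_repr))
        # The recommendation loss is computed from user_repr; a cross-entropy loss
        # on attr_logits pushes the encoder (via the reversed gradient) to drop
        # attribute information from the representation.
        return user_repr, attr_logits
```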

Authors (8)
  1. Yiqing Wu (10 papers)
  2. Ruobing Xie (97 papers)
  3. Yongchun Zhu (35 papers)
  4. Fuzhen Zhuang (97 papers)
  5. Xiang Ao (33 papers)
  6. Xu Zhang (343 papers)
  7. Leyu Lin (43 papers)
  8. Qing He (88 papers)
Citations (42)