Causal Intervention for Fairness in Multi-behavior Recommendation (2209.04589v3)

Published 10 Sep 2022 in cs.IR and cs.AI

Abstract: Recommender systems usually learn user interests from various user behaviors, including clicks and post-click behaviors (e.g., like and favorite). However, these behaviors inevitably exhibit popularity bias, leading to unfairness issues: 1) for items with similar quality, more popular ones get more exposure; and 2) even worse, popular items with lower quality may receive more exposure than higher-quality but less popular ones. Existing work on mitigating popularity bias blindly eliminates the bias and usually ignores the effect of item quality. We argue that the relationships between different user behaviors (e.g., the conversion rate) actually reflect item quality. Therefore, to handle the unfairness issues, we propose to mitigate the popularity bias by considering multiple user behaviors. In this work, we examine the causal relationships behind the interaction generation procedure in multi-behavior recommendation. Specifically, we find that: 1) item popularity is a confounder between the exposed items and users' post-click interactions, leading to the first unfairness; and 2) some hidden confounders (e.g., the reputation of item producers) affect both item popularity and quality, resulting in the second unfairness. To alleviate these confounding issues, we propose a causal framework to estimate the causal effect, which leverages backdoor adjustment to block the backdoor paths caused by the confounders. In the inference stage, we remove the negative effect of popularity and utilize the positive effect of quality for recommendation. Experiments on two real-world datasets validate the effectiveness of our proposed framework, which enhances fairness without sacrificing recommendation accuracy.
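For context, the backdoor adjustment mentioned in the abstract estimates the effect of exposing an item by stratifying over the confounder instead of conditioning on it. A minimal sketch in standard causal notation follows; the variable mapping (exposed item I, item popularity Z as the confounder, post-click interaction Y) is an assumption inferred from the abstract rather than the paper's exact formulation:

P(Y \mid do(I)) = \sum_{z} P(Y \mid I, Z = z)\, P(Z = z)

Summing over the popularity strata z with the marginal P(Z = z), rather than the conditional P(Z = z \mid I), blocks the backdoor path through popularity, so the estimated effect reflects item quality rather than historical exposure.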

Authors (6)
  1. Xi Wang (275 papers)
  2. Wenjie Wang (150 papers)
  3. Wenge Rong (27 papers)
  4. Fuli Feng (143 papers)
  5. Chuantao Yin (4 papers)
  6. Zhang Xiong (17 papers)
Citations (1)
