Fair Matrix Factorisation for Large-Scale Recommender Systems (2209.04394v2)

Published 9 Sep 2022 in cs.IR

Abstract: Recommender systems are hedged with various requirements, such as ranking quality, optimisation efficiency, and item fairness. Item fairness is an emerging yet impending issue in practical systems. The notion of item fairness requires controlling the opportunity of items (e.g. the exposure) by considering the entire set of rankings recommended for users. However, the intrinsic nature of fairness destroys the separability of optimisation subproblems for users and items, which is an essential property of conventional scalable algorithms, such as implicit alternating least squares (iALS). Few fairness-aware methods are thus available for large-scale item recommendation. Because of the paucity of simple tools for practitioners, unfairness issues would be costly to solve or, at worst, would be abandoned. This study takes a step towards solving real-world unfairness issues by developing a simple and scalable collaborative filtering method for fairness-aware item recommendation. We built a method named fiADMM, which inherits the scalability of iALS and maintains a provable convergence guarantee.
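The abstract's key technical point is that a fairness penalty couples all items, destroying the separability that iALS exploits, and that an ADMM-style splitting can restore it. The sketch below is a hypothetical, simplified illustration of that idea, not the paper's fiADMM algorithm: a toy alternating-least-squares loop on dense data where an auxiliary copy `Z` of the item factors absorbs an assumed fairness-style penalty, so the per-item ridge-regression updates stay separable. All parameter names and the specific penalty are illustrative assumptions.

```python
import numpy as np

def ials_admm_sketch(R, k=8, lam=0.1, rho=1.0, gamma=0.05, iters=20, seed=0):
    """Toy ADMM-augmented alternating least squares (illustrative only).

    R    : dense (n_users x n_items) feedback matrix.
    lam  : L2 regularisation weight, as in plain iALS-style solvers.
    rho  : ADMM penalty weight; gamma : weight of a stand-in item-level
           shrinkage playing the role of a fairness regulariser.

    A penalty that couples items would break the row-wise separability of
    the item subproblem; introducing a copy Z of the item factors V moves
    the coupling into a separate proximal step, so the U and V updates
    remain independent least-squares problems.
    """
    rng = np.random.default_rng(seed)
    n_u, n_i = R.shape
    U = rng.normal(scale=0.1, size=(n_u, k))
    V = rng.normal(scale=0.1, size=(n_i, k))
    Z = V.copy()             # ADMM copy of the item factors
    Y = np.zeros_like(V)     # scaled dual variable
    I = np.eye(k)
    for _ in range(iters):
        # User step: ordinary ridge regression against current item factors.
        U = R @ V @ np.linalg.inv(V.T @ V + lam * I)
        # Item step: ridge regression plus the ADMM proximal term;
        # still separable over items because the coupling lives in Z.
        G = np.linalg.inv(U.T @ U + (lam + rho) * I)
        V = (R.T @ U + rho * (Z - Y)) @ G
        # Z step: simple shrinkage standing in for a fairness prox operator.
        Z = rho * (V + Y) / (rho + gamma)
        # Dual update.
        Y = Y + V - Z
    return U, V
```

In the actual large-scale setting the per-row least-squares solves would be distributed exactly as in iALS; the point of the splitting is that the fairness term only enters through the `Z` proximal step.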
