Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search (1905.01989v3)

Published 30 Apr 2019 in cs.IR and cs.LG

Abstract: We present a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems. We first propose complementary measures to quantify bias with respect to protected attributes such as gender and age. We then present algorithms for computing fairness-aware re-ranking of results. For a given search or recommendation task, our algorithms seek to achieve a desired distribution of top ranked results with respect to one or more protected attributes. We show that such a framework can be tailored to achieve fairness criteria such as equality of opportunity and demographic parity depending on the choice of the desired distribution. We evaluate the proposed algorithms via extensive simulations over different parameter choices, and study the effect of fairness-aware ranking on both bias and utility measures. We finally present the online A/B testing results from applying our framework towards representative ranking in LinkedIn Talent Search, and discuss the lessons learned in practice. Our approach resulted in tremendous improvement in the fairness metrics (nearly three fold increase in the number of search queries with representative results) without affecting the business metrics, which paved the way for deployment to 100% of LinkedIn Recruiter users worldwide. Ours is the first large-scale deployed framework for ensuring fairness in the hiring domain, with the potential positive impact for more than 630M LinkedIn members.

Fairness-Aware Ranking in Search and Recommendation Systems: An Application to LinkedIn Talent Search

The paper by Sahin Cem Geyik, Stuart Ambler, and Krishnaram Kenthapadi presents a seminal framework for quantifying and mitigating algorithmic bias in ranking mechanisms used by large-scale search and recommendation systems, specifically in the context of LinkedIn Talent Search. In response to growing concern about bias in machine learning models that rank individuals, the authors propose both theoretical measures and a practical mitigation approach.

The framework centers on measures that quantify bias with respect to protected attributes such as gender and age, complemented by algorithms that re-rank results to improve fairness. Key to the approach is the notion of a "desired distribution" over the protected attributes, which serves as the baseline for fairness adjustments. By choosing this distribution appropriately, the framework can be tailored to fairness criteria such as equality of opportunity or demographic parity, making it adaptable across use cases and fairness definitions.
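
The paper's bias measures compare observed proportions in the top-ranked results against the desired distribution; one of them is a skew-style log-ratio metric. A minimal Python sketch of that idea follows (the function name, smoothing constant, and data layout are illustrative, not the paper's code):

```python
import math
from collections import Counter
from typing import Dict, List

def skew_at_k(top_k_attrs: List[str], desired: Dict[str, float],
              value: str, eps: float = 1e-9) -> float:
    """Log-ratio of a value's observed proportion in the top k to its
    desired proportion: positive means over-represented, negative means
    under-represented. eps guards against log(0) for absent values."""
    k = len(top_k_attrs)
    observed = Counter(top_k_attrs)[value] / k
    return math.log((observed + eps) / (desired[value] + eps))

# Example: 3 of the top 10 results carry value "f" against a desired 50/50 split.
print(skew_at_k(["f"] * 3 + ["m"] * 7, {"f": 0.5, "m": 0.5}, "f"))  # ~ -0.51
```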

The framework's utility and effectiveness are demonstrated through extensive simulations and real-world deployment at LinkedIn. The simulations assess the algorithms across a broad range of scenarios covering different protected attributes and parameter choices, and show that the proposed algorithms substantially improve fairness metrics without severely compromising utility, as measured by normalized discounted cumulative gain (NDCG).
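
For reference, NDCG rewards placing highly relevant items near the top and normalizes by the best achievable ordering; a standard self-contained computation (not code from the paper) looks like this:

```python
import math
from typing import List

def ndcg(relevances: List[float]) -> float:
    """NDCG of a ranked list of graded relevances (higher is better):
    DCG of the list normalized by the DCG of its ideal reordering."""
    def dcg(rels: List[float]) -> float:
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(ndcg([3.0, 2.0, 3.0, 0.0, 1.0]))  # < 1.0, since the list is not ideally ordered
```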

The authors introduce several re-ranking algorithms: DetGreedy, DetCons, DetRelaxed, and DetConstSort, each trading off fairness guarantees against ranking utility in a different way. Notably, DetConstSort is proven to be feasible, that is, guaranteed to satisfy the minimum-representation constraints at every prefix of the ranking, an essential property for deployment in systems with diverse datasets and stakeholder requirements. A sketch of the greedy variant appears below.
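
To make the greedy idea concrete, here is a rough sketch following the paper's description of DetGreedy (the data layout and names are illustrative): at each position, attribute values that have fallen below their minimum quota floor(p_a * k) are served first; otherwise any value still below its maximum quota ceil(p_a * k) is eligible, and ties are broken by candidate score.

```python
import math
from typing import Dict, List, Tuple

def det_greedy(ranked: Dict[str, List[Tuple[str, float]]],
               desired: Dict[str, float], k: int) -> List[Tuple[str, float]]:
    """Greedy fairness-aware re-ranking sketch.

    ranked:  per attribute value, a list of (candidate_id, score)
             sorted by descending score.
    desired: desired proportion p_a for each attribute value a.
    k:       length of the re-ranked list to produce.
    """
    counts = {a: 0 for a in desired}  # candidates placed so far, per value
    ptr = {a: 0 for a in desired}     # next unplaced candidate, per value
    out: List[Tuple[str, float]] = []
    for pos in range(1, k + 1):
        # Values whose minimum quota floor(p_a * pos) is not yet met.
        below_min = [a for a in desired if ptr[a] < len(ranked[a])
                     and counts[a] < math.floor(desired[a] * pos)]
        # Fallback: values still below their maximum quota ceil(p_a * pos).
        below_max = [a for a in desired if ptr[a] < len(ranked[a])
                     and counts[a] < math.ceil(desired[a] * pos)]
        pool = below_min or below_max
        if not pool:
            break  # every eligible list is exhausted
        # Among eligible values, take the highest-scoring next candidate.
        best = max(pool, key=lambda a: ranked[a][ptr[a]][1])
        out.append(ranked[best][ptr[best]])
        ptr[best] += 1
        counts[best] += 1
    return out

# Example: interleave two attribute values under a 50/50 desired distribution.
ranked = {"f": [("u1", 0.9), ("u4", 0.5)], "m": [("u2", 0.8), ("u3", 0.7)]}
print(det_greedy(ranked, {"f": 0.5, "m": 0.5}, 4))
```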

The paper also reports insights from a large-scale deployment of the framework in LinkedIn's search systems. The A/B testing results indicate a nearly threefold increase in the number of search queries with representative results, without impacting key business metrics. This demonstrates the practical applicability of the framework and marks it as a pioneering effort in deploying fairness-aware ranking at significant scale in the recruitment domain.

The paper's implications are notable both theoretically and practically. Theoretically, it contributes to the ongoing discourse on algorithmic fairness, offering a robust method to align machine learning outcomes with societal fairness standards. Practically, it provides a scalable solution applicable to web-scale search and recommendation systems, addressing real-world biases in automated decision-making processes.

Looking to future work, the paper points to the social dimensions of fairness, particularly how desired fairness outcomes should be defined and how sensitive attribute information can be gathered responsibly. Further investigations could also explore a broader set of fairness-aware algorithms and their implications across application areas.

In conclusion, this paper underscores the importance of integrating fairness considerations into machine learning-driven systems. It provides a concrete framework and algorithms that not only enhance fairness but do so with a level of pragmatism that facilitates scalable deployment, paving the way for future research and development in fairness-aware technologies.

Authors (3)
  1. Sahin Cem Geyik
  2. Stuart Ambler
  3. Krishnaram Kenthapadi
Citations (359)