
Fairness in Ranking: Robustness through Randomization without the Protected Attribute (2403.19419v1)

Published 28 Mar 2024 in cs.LG, cs.AI, and cs.CY

Abstract: There has been great interest in fairness in machine learning, especially in relation to classification problems. In ranking-related problems, such as online advertising, recommender systems, and HR automation, much work on fairness remains to be done. Two complications arise: first, the protected attribute may not be available in many applications; second, there are multiple measures of fairness of rankings, and optimization-based methods that utilize a single such measure may produce rankings that are unfair with respect to other measures. In this work, we propose a randomized method for post-processing rankings that does not require the availability of the protected attribute. In an extensive numerical study, we show the robustness of our method with respect to P-Fairness and its effectiveness with respect to Normalized Discounted Cumulative Gain (NDCG) relative to the baseline ranking, improving on previously proposed methods.
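The abstract references the two quantities used in the evaluation: P-Fairness and Normalized Discounted Cumulative Gain (NDCG). Below is a minimal Python sketch of the standard NDCG computation together with a toy randomized re-ranking step. The randomized_postprocess helper and its parameters are hypothetical illustrations of perturbing a baseline ranking without access to the protected attribute; they are not the procedure proposed in the paper, which is not described in the abstract.

import math
import random

def ndcg(relevances, k=None):
    # Standard NDCG: DCG of the given order divided by DCG of the ideal (sorted) order.
    if k is None:
        k = len(relevances)
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal_dcg = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

def randomized_postprocess(ranking, swap_prob=0.3, window=3, rng=None):
    # Hypothetical randomized re-ranking: each item is swapped, with probability
    # swap_prob, with a neighbour at most `window` positions away. This only
    # illustrates perturbing a baseline ranking without using a protected
    # attribute; it is NOT the method proposed in the paper.
    rng = rng or random.Random(0)
    out = list(ranking)
    for i in range(len(out)):
        if rng.random() < swap_prob:
            j = min(len(out) - 1, max(0, i + rng.randint(-window, window)))
            out[i], out[j] = out[j], out[i]
    return out

# Toy example: relevance scores listed in the baseline ranking order.
baseline = [3, 3, 2, 2, 1, 1, 0, 0]
perturbed = randomized_postprocess(baseline)
print(f"baseline NDCG:  {ndcg(baseline):.3f}")   # 1.000, since the baseline order is already ideal
print(f"perturbed NDCG: {ndcg(perturbed):.3f}")  # utility cost of the random swaps

On this toy relevance vector, the baseline NDCG is 1.0 by construction, and the printed perturbed value indicates how much ranking utility the random swaps give up.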

