Flexible Fairness-Aware Learning via Inverse Conditional Permutation (2404.05678v3)
Abstract: Equalized odds, as a popular notion of algorithmic fairness, aims to ensure that sensitive variables, such as race and gender, do not unfairly influence the algorithm's prediction when conditioning on the true outcome. Despite rapid advancements, current research primarily focuses on equalized odds violations caused by a single sensitive attribute, leaving the challenge of simultaneously accounting for multiple attributes largely unaddressed. We bridge this gap by introducing an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme. FairICP offers a theoretically justified, flexible, and efficient scheme to promote equalized odds under fairness conditions described by complex and multidimensional sensitive attributes. The efficacy and adaptability of our method are demonstrated through both simulation studies and empirical analyses of real-world datasets.
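The paper's inverse conditional permutation scheme is not reproduced here, but the underlying idea it builds on — resampling the sensitive attribute within strata of the true outcome, so the conditional law of A given Y is preserved while any further dependence on the prediction is broken — can be sketched in a few lines. The function names and the simple equalized-odds gap metric below are illustrative assumptions for a binary outcome and discrete sensitive attribute, not the paper's actual algorithm:

```python
import numpy as np

def permute_within_strata(a, y, rng):
    """Permute the sensitive attribute a within each level of the true outcome y.

    This preserves the conditional distribution A | Y exactly, while breaking
    any residual dependence between A and other variables given Y — the null
    behavior an adversarial discriminator can be trained against.
    """
    a_perm = a.copy()
    for level in np.unique(y):
        idx = np.where(y == level)[0]
        a_perm[idx] = a[rng.permutation(idx)]
    return a_perm

def equalized_odds_gap(y_pred, y_true, a):
    """Largest difference in mean prediction across groups of a,
    computed within each true-outcome stratum (a crude violation measure)."""
    gaps = []
    for level in np.unique(y_true):
        mask = y_true == level
        rates = [y_pred[mask & (a == g)].mean() for g in np.unique(a)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)
```

In an adversarial setup of the kind the abstract describes, a discriminator would compare (prediction, A) pairs against (prediction, permuted A) pairs; the predictor is penalized whenever the two are distinguishable, which drives it toward equalized odds.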
Authors: Yuheng Lai, Leying Guan