Fairness Risks for Group-conditionally Missing Demographics
Abstract: Fairness-aware classification models have gained increasing attention in recent years as concerns grow over discrimination against certain demographic groups. Most existing models require full knowledge of the sensitive features, which can be impractical due to privacy concerns, legal restrictions, and individuals' fear of discrimination. The key challenge we address is the group dependency of this unavailability, e.g., people in some age ranges may be more reluctant to reveal their age. Our solution augments general fairness risks with probabilistic imputations of the sensitive features, while jointly learning the group-conditional missingness probabilities in a variational auto-encoder. Our model is shown to be effective on both image and tabular datasets, achieving an improved trade-off between accuracy and fairness.
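To make the core idea concrete, the sketch below (our own minimal illustration, not the paper's implementation) shows how a fairness risk can be evaluated with probabilistic imputations: instead of assigning each example to one observed group, every example contributes to each group's statistic in proportion to an imputed posterior over the sensitive attribute. The function `soft_dp_gap` and the two-group toy data are hypothetical names introduced here for illustration; in the paper the imputation posterior would come from the jointly trained variational auto-encoder.

```python
import numpy as np

def soft_dp_gap(y_prob, a_post):
    """Demographic-parity gap under a probabilistic imputation of the
    sensitive attribute.

    y_prob: shape (n,), predicted P(Y=1 | x) for each example.
    a_post: shape (n, k), imputed posterior P(A=g | x) over k groups
            (rows sum to 1).
    Returns the max pairwise gap between group-wise expected positive
    rates, where each example is soft-assigned to every group.
    """
    # Normalize each group's column so contributions form a weighted mean:
    # E[y | A=g] ≈ sum_i q_i(g) * y_i / sum_i q_i(g)
    weights = a_post / a_post.sum(axis=0, keepdims=True)  # (n, k)
    group_rates = weights.T @ y_prob                      # (k,)
    return group_rates.max() - group_rates.min()

# Toy example with two groups: high scores correlate with group 0.
y = np.array([0.9, 0.8, 0.2, 0.1])
q = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.2, 0.8],
              [0.1, 0.9]])
gap = soft_dp_gap(y, q)  # 0.5 for this toy data
```

A differentiable penalty like this can be added to the classification loss, which is what makes joint training of the classifier and the imputation model possible.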