
Toward Fairness via Maximum Mean Discrepancy Regularization on Logits Space (2402.13061v1)

Published 20 Feb 2024 in cs.CV

Abstract: Fairness has become increasingly pivotal in machine learning for high-risk applications such as healthcare and facial recognition. However, previous methods that constrain the logits space have notable deficiencies. We therefore propose a novel framework, Logits-MMD, that achieves the fairness condition by imposing Maximum Mean Discrepancy constraints on the output logits. Quantitative analysis and experimental results show that our framework has better properties than previous methods and achieves state-of-the-art results on two facial recognition datasets and one animal dataset. Finally, our experiments demonstrate that our debiasing approach achieves the fairness condition effectively.
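The abstract describes regularizing the output logits so that their distributions match across demographic groups, measured with Maximum Mean Discrepancy (MMD). As an illustrative sketch only (not the paper's exact implementation), the following NumPy snippet computes a biased estimate of squared MMD between two groups' logits using an RBF kernel; the group sizes, logit dimension, and bandwidth `sigma` are arbitrary assumptions for the example:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(logits_a, logits_b, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    the logit distributions of two demographic groups."""
    k_aa = rbf_kernel(logits_a, logits_a, sigma).mean()
    k_bb = rbf_kernel(logits_b, logits_b, sigma).mean()
    k_ab = rbf_kernel(logits_a, logits_b, sigma).mean()
    return k_aa + k_bb - 2.0 * k_ab

# Identically distributed logits give MMD^2 near zero; a shifted
# distribution gives a clearly larger value (synthetic data).
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(64, 2))   # logits, group A
b = rng.normal(0.0, 1.0, size=(64, 2))   # logits, group B
c = rng.normal(3.0, 1.0, size=(64, 2))   # shifted logits
print(mmd2(a, b))  # small
print(mmd2(a, c))  # much larger
```

In a training loop, a term like `lambda * mmd2(logits_a, logits_b)` would be added to the task loss so the optimizer pushes the two groups' logit distributions together; the trade-off weight `lambda` is a hypothetical name here.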

