Adversarial Stacked Auto-Encoders for Fair Representation Learning (2107.12826v1)

Published 27 Jul 2021 in cs.LG and cs.AI

Abstract: Training machine learning models with accuracy as the only goal may promote prejudices and discriminatory behaviors embedded in the data. One solution is to learn latent representations that fulfill specific fairness metrics. Different types of learning methods are employed to map the data into a fair representation space. The main purpose is to learn a latent representation of the data that scores well on a fairness metric while remaining usable for the downstream task. In this paper, we propose a new fair representation learning approach that leverages different levels of representation of the data to tighten the fairness bounds of the learned representation. Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces results in an improvement of fairness compared to other existing approaches.
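
The architecture the abstract describes (stacked auto-encoders with fairness enforced adversarially at each latent space) can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch under assumptions of our own, not the authors' implementation: the layer sizes, the loss weighting `lam`, the single binary sensitive attribute, and the names `FairAEStage` and `encoder_objective` are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FairAEStage(nn.Module):
    """One auto-encoder stage plus an adversary that tries to predict
    the sensitive attribute s from the latent code z."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, in_dim)
        self.adversary = nn.Linear(latent_dim, 1)  # binary sensitive attribute

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z), self.adversary(z).squeeze(-1)

# Two stacked stages: the latent code of one stage is the input of the
# next, and fairness is enforced adversarially at every latent space.
stages = nn.ModuleList([FairAEStage(32, 16), FairAEStage(16, 8)])

def encoder_objective(x, s, lam=1.0):
    """Loss minimized by the encoders/decoders: reconstruct each level
    well while fooling the adversaries (hence the minus sign). In a full
    adversarial scheme the adversaries would be trained in alternating
    steps to minimize their own prediction loss."""
    total_recon, total_adv = 0.0, 0.0
    h = x
    for stage in stages:
        z, x_hat, s_logit = stage(h)
        total_recon = total_recon + F.mse_loss(x_hat, h)
        total_adv = total_adv + F.binary_cross_entropy_with_logits(s_logit, s)
        h = z
    return total_recon - lam * total_adv

# Tiny usage example with random data.
x = torch.randn(64, 32)                  # batch of inputs
s = torch.randint(0, 2, (64,)).float()   # binary sensitive attribute
loss = encoder_objective(x, s)
loss.backward()
```

The intuition behind stacking, per the abstract, is that applying the adversarial penalty at several successive latent spaces tightens the fairness bounds compared to penalizing a single representation.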

Authors (4)
  1. Patrik Joslin Kenfack (6 papers)
  2. Adil Mehmood Khan (17 papers)
  3. Rasheed Hussain (26 papers)
  4. S. M. Ahsan Kazmi (7 papers)
Citations (4)
