
SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles (2204.05157v1)

Published 11 Apr 2022 in cs.LG and cs.AI

Abstract: A critical concern in data-driven processes is to build models whose outcomes do not discriminate against some demographic groups, including gender, ethnicity, or age. To ensure non-discrimination in learning tasks, knowledge of the group attributes is essential. However, in practice, these attributes may not be available due to legal and ethical requirements. To address this challenge, this paper studies a model that protects the privacy of the individuals' sensitive information while also allowing it to learn non-discriminatory predictors. A key characteristic of the proposed model is to enable the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model. The paper analyzes the relationship between accuracy, privacy, and fairness, and the experimental evaluation illustrates the benefits of the proposed models on several prediction tasks. In particular, this proposal is the first to allow both scalable and accurate training of private and fair models for very large neural networks.
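As background for the PATE-style aggregation the title refers to: a teacher ensemble labels a query by a noisy plurality vote, and the added noise is what provides differential privacy. The sketch below shows the generic noisy-argmax aggregation step only, not the fairness machinery specific to SF-PATE; the function name and the `noise_scale` parameterization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def noisy_aggregate(teacher_preds, num_classes, noise_scale, rng=None):
    """Generic PATE-style noisy argmax over teacher votes.

    Illustrative sketch only (not the SF-PATE algorithm itself):
    each teacher, trained on a disjoint data partition, casts one
    vote; Laplace noise on the vote histogram yields a differentially
    private label for the student model.
    """
    rng = np.random.default_rng(rng)
    # Histogram of class votes across the teacher ensemble.
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    # Laplace noise with scale `noise_scale` privatizes the vote counts.
    counts += rng.laplace(scale=noise_scale, size=num_classes)
    return int(np.argmax(counts))
```

With many teachers in strong agreement, the noisy argmax almost always returns the majority label while revealing little about any single teacher's training data; the paper's contribution is extending this style of private aggregation to fair (non-discriminatory) teacher models at scale.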

Authors (4)
  1. Cuong Tran (17 papers)
  2. Keyu Zhu (10 papers)
  3. Ferdinando Fioretto (76 papers)
  4. Pascal Van Hentenryck (168 papers)
Citations (10)