PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning (2205.11584v2)

Published 23 May 2022 in cs.LG and cs.CR

Abstract: Group fairness ensures that the outcomes of ML-based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP). In doing so, we propose a method for training group-fair ML models in cross-device FL under complete and formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values.
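
To make the combination concrete, below is a minimal, self-contained sketch of the high-level idea described in the abstract, not the authors' actual PrivFairFL protocol: clients keep their sensitive attribute values local, secret-share the group-wise counts needed for a fairness statistic (a toy two-server stand-in for MPC), and only the aggregated statistic is reconstructed and released with Laplace noise for differential privacy. The two-server setup, the noise scale, and all function names here are illustrative assumptions.

```python
# Illustrative sketch only -- not the PrivFairFL protocol itself.
import numpy as np

PRIME = 2**31 - 1                      # modulus for additive secret sharing
rng = np.random.default_rng(0)

def share(value):
    """Split an integer into two additive shares mod PRIME (toy 2-server MPC)."""
    s0 = int(rng.integers(0, PRIME))
    return s0, (int(value) - s0) % PRIME

def client_stats(preds, sensitive):
    """Counts one client contributes: positive predictions and size per group."""
    return [int(preds[sensitive == g].sum()) for g in (0, 1)] + \
           [int((sensitive == g).sum()) for g in (0, 1)]

# Simulate a few clients, each holding local predictions and a sensitive
# attribute that never leaves the device.
clients = []
for _ in range(5):
    n = int(rng.integers(20, 50))
    sensitive = rng.integers(0, 2, size=n)
    preds = rng.integers(0, 2, size=n)
    clients.append(client_stats(preds, sensitive))

# Each client splits its counts into two shares; each computation server only
# ever sees uniformly random-looking values, never the underlying counts.
server0 = [0, 0, 0, 0]
server1 = [0, 0, 0, 0]
for stats in clients:
    for i, v in enumerate(stats):
        s0, s1 = share(v)
        server0[i] = (server0[i] + s0) % PRIME
        server1[i] = (server1[i] + s1) % PRIME

# Only the aggregated totals are reconstructed, then perturbed before release.
pos0, pos1, n0, n1 = [(a + b) % PRIME for a, b in zip(server0, server1)]
epsilon = 1.0                          # illustrative privacy budget
noisy = [v + rng.laplace(scale=1.0 / epsilon) for v in (pos0, pos1, n0, n1)]
# In a real deployment the noise would be generated inside the MPC so that no
# party ever sees the exact totals; adding it after reconstruction keeps this
# sketch short.

gap = abs(noisy[0] / max(noisy[2], 1.0) - noisy[1] / max(noisy[3], 1.0))
print(f"DP estimate of the demographic parity gap: {gap:.3f}")
```

The same pattern extends to the statistics a fairness-aware training objective would need at each round; the key point is that the server only ever learns differentially private aggregates, never individual sensitive attribute values.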

Authors (5)
  1. Sikha Pentyala (11 papers)
  2. Nicola Neophytou (5 papers)
  3. Anderson Nascimento (7 papers)
  4. Martine De Cock (30 papers)
  5. Golnoosh Farnadi (44 papers)
Citations (11)
