
Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning (2311.18190v1)

Published 30 Nov 2023 in cs.LG and cs.AI

Abstract: Federated Learning (FL) is a novel privacy-preserving distributed machine learning paradigm that protects user privacy and reduces the risk of data leakage by keeping training on each client's local data. Researchers have worked to design fair FL systems that ensure fairness of results, but the interplay between fairness and privacy remains less studied: improving the fairness of an FL system can affect user privacy, and strengthening privacy can in turn affect fairness. In this work, on the client side, we use fairness metrics such as Demographic Parity (DemP), Equalized Odds (EOs), and Disparate Impact (DI) to construct a locally fair model, and to protect the privacy of the client model we propose a privacy-preserving fair FL method. The results show that the accuracy of the fair model increases under privacy protection because the privacy mechanism breaks the constraints imposed by the fairness metrics. From our experiments, we characterize the relationship between privacy, fairness, and utility, and find that there is a tradeoff among them.
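
As context for the fairness metrics named in the abstract, the sketch below computes the textbook definitions of the Demographic Parity gap, Equalized Odds gap, and Disparate Impact ratio for a binary classifier with a binary sensitive attribute. It is an illustrative sketch of the standard definitions, not the paper's exact formulation; the function and variable names are hypothetical.

```python
import numpy as np

def group_fairness_metrics(y_true, y_pred, sensitive):
    """Standard group-fairness metrics for binary labels/predictions.

    y_true, y_pred, sensitive: 1-D arrays of 0/1 values.
    Returns the Demographic Parity gap, Equalized Odds gap, and
    Disparate Impact ratio between the two sensitive groups.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    s = np.asarray(sensitive)

    # Positive-prediction rate per group: P(Y_hat = 1 | A = a)
    rate = {a: y_pred[s == a].mean() for a in (0, 1)}

    # Demographic Parity: difference in positive-prediction rates
    dem_parity_gap = abs(rate[0] - rate[1])

    # Disparate Impact: ratio of positive-prediction rates ("80% rule")
    disparate_impact = min(rate[0], rate[1]) / max(rate[0], rate[1])

    # Equalized Odds: largest gap in TPR or FPR across groups
    def tpr_fpr(a):
        pred, true = y_pred[s == a], y_true[s == a]
        tpr = pred[true == 1].mean()  # P(Y_hat = 1 | Y = 1, A = a)
        fpr = pred[true == 0].mean()  # P(Y_hat = 1 | Y = 0, A = a)
        return tpr, fpr

    (tpr0, fpr0), (tpr1, fpr1) = tpr_fpr(0), tpr_fpr(1)
    eq_odds_gap = max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))

    return dem_parity_gap, eq_odds_gap, disparate_impact
```

Driving these gaps toward zero during client-side training is what constrains the local fair model; the abstract's observation is that the privacy mechanism loosens those constraints and thereby raises accuracy.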

Authors (6)
  1. Kangkang Sun (1 paper)
  2. Xiaojin Zhang (54 papers)
  3. Xi Lin (135 papers)
  4. Gaolei Li (29 papers)
  5. Jing Wang (740 papers)
  6. Jianhua Li (38 papers)
Citations (3)