
How to Democratise and Protect AI: Fair and Differentially Private Decentralised Deep Learning (2007.09370v1)

Published 18 Jul 2020 in cs.CR, cs.DC, cs.LG, and stat.ML

Abstract: This paper first considers the research problem of fairness in collaborative deep learning while ensuring privacy. A novel reputation system is proposed through digital tokens and local credibility to ensure fairness, in combination with differential privacy to guarantee privacy. In particular, we build a fair and differentially private decentralised deep learning framework called FDPDDL, which enables parties to derive more accurate local models in a fair and private manner by using our developed two-stage scheme: during the initialisation stage, artificial samples generated by a Differentially Private Generative Adversarial Network (DPGAN) are used to mutually benchmark the local credibility of each party and generate initial tokens; during the update stage, Differentially Private SGD (DPSGD) is used to facilitate collaborative privacy-preserving deep learning, and the local credibility and tokens of each party are updated according to the quality and quantity of individually released gradients. Experimental results on benchmark datasets under three realistic settings demonstrate that FDPDDL achieves high fairness, yields comparable accuracy to the centralised and distributed frameworks, and delivers better accuracy than the standalone framework.
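The update stage relies on DPSGD, whose core mechanism is standard: clip each per-example gradient to a fixed norm, average, and add calibrated Gaussian noise before the parameter update. The sketch below illustrates that mechanism only; the function name, hyperparameter values, and NumPy-based framing are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD update (hypothetical helper, not FDPDDL code):
    clip each per-example gradient to clip_norm, average the clipped
    gradients, add Gaussian noise scaled by noise_multiplier * clip_norm."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation follows the usual sigma * C / batch_size form.
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)
```

Clipping bounds each party's influence on the shared update, which is also what makes the released gradients comparable when credibility and tokens are adjusted by gradient quality.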

Authors (5)
  1. Lingjuan Lyu (131 papers)
  2. Yitong Li (95 papers)
  3. Karthik Nandakumar (57 papers)
  4. Jiangshan Yu (29 papers)
  5. Xingjun Ma (114 papers)
Citations (46)
