
Data-Free Evaluation of User Contributions in Federated Learning (2108.10623v1)

Published 24 Aug 2021 in cs.LG and cs.GT

Abstract: Federated learning (FL) trains a machine learning model on mobile devices in a distributed manner using each device's private data and computing resources. A critical issue is to evaluate individual users' contributions so that (1) users' effort in model training can be compensated with proper incentives and (2) malicious and low-quality users can be detected and removed. The state-of-the-art solutions require a representative test dataset for the evaluation purpose, but such a dataset is often unavailable and hard to synthesize. In this paper, we propose a method called Pairwise Correlated Agreement (PCA) based on the idea of peer prediction to evaluate user contribution in FL without a test dataset. PCA achieves this using the statistical correlation of the model parameters uploaded by users. We then apply PCA to designing (1) a new federated learning algorithm called Fed-PCA, and (2) a new incentive mechanism that guarantees truthfulness. We evaluate the performance of PCA and Fed-PCA using the MNIST dataset and a large industrial product recommendation dataset. The results demonstrate that our Fed-PCA outperforms the canonical FedAvg algorithm and other baseline methods in accuracy, and at the same time, PCA effectively incentivizes users to behave truthfully.
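The core intuition behind peer-prediction-based contribution scoring can be illustrated with a minimal sketch. The abstract does not specify the exact PCA formula, so the snippet below is only a simplified stand-in under one assumption: updates trained on genuine local data should be statistically correlated with peers' updates, while random or free-riding uploads should not. The function names (`pearson`, `contribution_scores`) and the scoring rule (mean pairwise Pearson correlation) are illustrative choices, not the paper's mechanism.

```python
import random
from statistics import mean


def pearson(u, v):
    # Pearson correlation between two equal-length parameter vectors.
    mu, mv = mean(u), mean(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)


def contribution_scores(updates):
    # Score each user's uploaded update by its mean correlation with every
    # peer's update: truthful updates computed on real data tend to agree
    # with one another, while noise uploads correlate with no one.
    n = len(updates)
    return [mean(pearson(updates[i], updates[j])
                 for j in range(n) if j != i)
            for i in range(n)]


# Toy demonstration: four honest users upload noisy versions of the same
# underlying gradient; a fifth "free-rider" uploads pure noise.
random.seed(0)
true_update = [random.gauss(0, 1) for _ in range(200)]
honest = [[g + random.gauss(0, 0.3) for g in true_update] for _ in range(4)]
freerider = [random.gauss(0, 1) for _ in range(200)]

scores = contribution_scores(honest + [freerider])
# Honest users receive high scores; the free-rider's score is near zero,
# so it can be deprioritized or excluded without any test dataset.
```

In this sketch the honest users' scores cluster well above the free-rider's, which mirrors the paper's goal of detecting low-quality contributions without a representative test dataset.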

Authors (8)
  1. Hongtao Lv (7 papers)
  2. Zhenzhe Zheng (36 papers)
  3. Tie Luo (44 papers)
  4. Fan Wu (264 papers)
  5. Shaojie Tang (99 papers)
  6. Lifeng Hua (4 papers)
  7. Rongfei Jia (14 papers)
  8. Chengfei Lv (22 papers)
Citations (21)
