Efficient Vertical Federated Learning with Secure Aggregation (2305.11236v1)

Published 18 May 2023 in cs.LG, cs.AI, and cs.CR

Abstract: The majority of work in privacy-preserving federated learning (FL) has focused on horizontally partitioned datasets, where clients share the same set of features and can train complete models independently. However, in many interesting problems, such as financial fraud detection and disease detection, individual data points are scattered across different clients/organizations, a setting known as vertical federated learning. Solutions for this type of FL require the exchange of gradients between participants and rarely consider privacy and security concerns, posing a potential risk of privacy leakage. In this work, we present a novel design for training vertical FL securely and efficiently using state-of-the-art security modules for secure aggregation. We demonstrate empirically that our method does not impact training performance whilst obtaining a 9.1e2 to 3.8e4 speedup compared to homomorphic encryption (HE).
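The abstract contrasts secure aggregation with homomorphic encryption. A minimal sketch of the mask-based secure-aggregation idea that such security modules build on is below; this is not the paper's protocol, and all names (`pairwise_masks`, `secure_aggregate`, the modulus choice, and the dealer-style shared seed standing in for real pairwise key agreement) are illustrative assumptions.

```python
# Sketch of additive mask-based secure aggregation: each pair of clients
# derives cancelling masks, so the server only learns the sum of updates,
# never any individual client's update. A shared RNG seed here plays the
# role of pairwise key agreement in real protocols (an assumption for brevity).
import random

FIELD = 2**31 - 1  # modulus for masked arithmetic (illustrative choice)

def pairwise_masks(client_ids, vec_len, seed=0):
    """Derive pairwise masks that cancel: mask[a] gets +s, mask[b] gets -s."""
    rng = random.Random(seed)
    masks = {i: [0] * vec_len for i in client_ids}
    for a in client_ids:
        for b in client_ids:
            if a < b:
                shared = [rng.randrange(FIELD) for _ in range(vec_len)]
                masks[a] = [(m + s) % FIELD for m, s in zip(masks[a], shared)]
                masks[b] = [(m - s) % FIELD for m, s in zip(masks[b], shared)]
    return masks

def secure_aggregate(client_updates):
    """Server sums the masked updates; the pairwise masks cancel in the sum."""
    ids = sorted(client_updates)
    vec_len = len(next(iter(client_updates.values())))
    masks = pairwise_masks(ids, vec_len)
    masked = {i: [(x + m) % FIELD for x, m in zip(client_updates[i], masks[i])]
              for i in ids}
    total = [0] * vec_len
    for i in ids:
        total = [(t + x) % FIELD for t, x in zip(total, masked[i])]
    return total

# Example: three clients, two-dimensional updates.
# secure_aggregate({0: [1, 2], 1: [3, 4], 2: [5, 6]}) → [9, 12]
```

Because the masks cancel only in the aggregate, this achieves privacy of individual updates with plain modular additions, which is why mask-based aggregation can be orders of magnitude faster than performing the same sum under homomorphic encryption.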

Authors (6)
  1. Xinchi Qiu (26 papers)
  2. Heng Pan (14 papers)
  3. Wanru Zhao (16 papers)
  4. Chenyang Ma (7 papers)
  5. Pedro Porto Buarque de Gusmão (12 papers)
  6. Nicholas D. Lane (97 papers)
Citations (2)
