
Vertical Federated Learning: Challenges, Methodologies and Experiments (2202.04309v2)

Published 9 Feb 2022 in cs.LG and cs.DC

Abstract: Federated learning (FL) has recently emerged as a promising distributed ML technology, owing to the advancing computational and sensing capacities of end-user devices along with growing concerns over users' privacy. As a special architecture in FL, vertical FL (VFL) constructs a hyper ML model by combining sub-models from different clients; these sub-models are trained locally on vertically partitioned data with distinct attributes. The design of VFL is therefore fundamentally different from that of conventional FL, raising new and unique research issues. In this paper, we discuss key challenges in VFL together with effective solutions, and conduct experiments on real-life datasets to shed light on these issues. Specifically, we first propose a general framework for VFL and highlight the key differences between VFL and conventional FL. We then discuss research challenges rooted in VFL systems under four aspects: security and privacy risks, expensive computation and communication costs, possible structural damage caused by model splitting, and system heterogeneity. Finally, we develop solutions to address these challenges and conduct extensive experiments to demonstrate their effectiveness.
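As a minimal sketch of the vertically partitioned setup the abstract describes, the following toy example (all names, shapes, and weights are hypothetical, not the paper's actual models) shows two clients holding disjoint attribute sets for the same samples, each producing a local sub-model embedding that a server-side top layer combines into a hyper model's prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vertical partition: two clients hold the SAME 4 samples,
# but disjoint feature (attribute) sets.
X_a = rng.normal(size=(4, 3))  # client A: 3 attributes per sample
X_b = rng.normal(size=(4, 2))  # client B: 2 different attributes

# Each client runs a local sub-model on its own attributes
# (here: an illustrative linear map producing a 2-dim embedding).
W_a = rng.normal(size=(3, 2))
W_b = rng.normal(size=(2, 2))
h_a = X_a @ W_a  # client A's local embedding, never the raw features
h_b = X_b @ W_b  # client B's local embedding

# The "hyper model" concatenates the sub-model outputs and applies
# a shared top layer to produce one prediction per shared sample.
W_top = rng.normal(size=(4, 1))
logits = np.concatenate([h_a, h_b], axis=1) @ W_top

print(logits.shape)  # one prediction per sample: (4, 1)
```

Note that only the embeddings (not the raw attributes) cross client boundaries, which is what makes the splitting, communication-cost, and privacy questions raised above specific to VFL.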

Authors (8)
  1. Kang Wei (41 papers)
  2. Jun Li (778 papers)
  3. Chuan Ma (35 papers)
  4. Ming Ding (219 papers)
  5. Sha Wei (3 papers)
  6. Fan Wu (264 papers)
  7. Guihai Chen (74 papers)
  8. Thilina Ranbaduge (13 papers)
Citations (79)