Split learning for health: Distributed deep learning without sharing raw patient data (1812.00564v1)

Published 3 Dec 2018 in cs.LG and stat.ML

Abstract: Can health entities collaboratively train deep learning models without sharing sensitive raw data? This paper proposes several configurations of a distributed deep learning method called SplitNN to facilitate such collaborations. SplitNN does not share raw data or model details with collaborating institutions. The proposed configurations of splitNN cater to practical settings of i) entities holding different modalities of patient data, ii) centralized and local health entities collaborating on multiple tasks and iii) learning without sharing labels. We compare performance and resource efficiency trade-offs of splitNN and other distributed deep learning methods like federated learning, large batch synchronous stochastic gradient descent and show highly encouraging results for splitNN.

Citations (628)

Summary

  • The paper introduces SplitNN, a novel distributed framework that trains health models without exposing raw patient data.
  • It details configurations such as Vanilla, U-shaped, and vertically partitioned setups to address diverse healthcare scenarios.
  • Empirical results show SplitNN reduces client computational load significantly while maintaining high accuracy compared to traditional methods.

Split Learning for Health: Distributed Deep Learning Without Sharing Raw Patient Data

Overview

The paper presents "SplitNN," a distributed deep learning framework tailored for health-related applications, addressing the critical need for collaborative model training without sharing raw patient data. SplitNN sidesteps data privacy concerns by allowing entities to engage in joint model development without exposing sensitive information such as electronic health records (EHRs), imaging data, or genetic markers.

Core Contributions

The authors introduce several configurations of SplitNN to accommodate diverse healthcare scenarios:

  1. Simple Vanilla SplitNN: Each healthcare entity trains its part of a neural network up to a designated "cut layer." The activations from this layer are sent to a central server, which completes the forward and backward passes, so the model is trained jointly without raw data leaving the client (see the sketch after this list).
  2. U-Shaped Configuration: This setup enables training without label sharing: the server's outputs are sent back to the client, which holds the final layers and computes the loss locally, so labels never leave the client even when they are sensitive.
  3. Vertically Partitioned Data Configuration: Designed for entities holding different modalities of patient data. Cut-layer outputs from the multiple entities are combined and processed centrally, enabling a cohesive model without exchanging the underlying data.
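To make the vanilla configuration concrete, here is a minimal single-process sketch of one SplitNN training step in PyTorch. The split point, layer sizes, optimizers, and synthetic batch are illustrative assumptions rather than the paper's architecture; in a real deployment the client and server halves run on separate machines, and only the cut-layer activations and their gradients cross the network.

```python
# Minimal sketch of a "vanilla" SplitNN training step (both halves run in
# one process here for clarity; in practice they live on different machines).
import torch
import torch.nn as nn

# Client-side portion: raw patient data stays here; only the cut-layer
# activations leave the institution. Layer sizes are illustrative.
client_net = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 16),   # the "cut layer"
)

# Server-side portion: completes the forward pass and, in the vanilla
# configuration, also receives the labels and computes the loss.
server_net = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

client_opt = torch.optim.SGD(client_net.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    client_opt.zero_grad()
    server_opt.zero_grad()

    # 1) Client forward pass up to the cut layer.
    cut_activations = client_net(x)

    # 2) "Send" the activations to the server; detach to mimic the network hop.
    server_input = cut_activations.detach().requires_grad_(True)

    # 3) Server completes the forward pass, computes the loss, and
    #    backpropagates down to its input.
    logits = server_net(server_input)
    loss = loss_fn(logits, y)
    loss.backward()
    server_opt.step()

    # 4) "Return" the gradient at the cut layer to the client, which finishes
    #    backpropagation through its own layers and updates them.
    cut_activations.backward(server_input.grad)
    client_opt.step()
    return loss.item()

# Synthetic batch standing in for one client's local data.
x = torch.randn(8, 32)
y = torch.randint(0, 2, (8,))
print(train_step(x, y))
```

The U-shaped variant would additionally send the server's output back to a final client-side segment that computes the loss, so the labels never leave the client.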

Comparison with Other Techniques

Compared to federated learning and large batch synchronous SGD, SplitNN shows considerable advantages in resource efficiency. Empirical evaluations on the CIFAR-10 and CIFAR-100 datasets show that SplitNN achieves high accuracy with lower computational demands, particularly on the client side. For instance, in the 100-client setting SplitNN reduced the client-side computational load to 0.1548 TFlops, compared with the 29.4 TFlops required by the other methods.

Implications and Future Directions

The implications of SplitNN for collaborative healthcare are profound. By allowing secure multi-institutional collaboration without compromising data privacy, SplitNN supports the development of robust models from otherwise siloed data sources. Furthermore, its application could extend beyond healthcare into other sensitive areas requiring stringent data privacy.

Future research might explore the integration of SplitNN with neural network compression techniques to further enhance its efficiency, particularly for deployment on edge devices. Additionally, developing novel configurations and refining existing ones could broaden its application scope.

Conclusion

The SplitNN framework offers a promising avenue for distributed learning in privacy-sensitive domains. Its ability to handle multi-modal data and various collaborative setups without data exchange sets it apart from existing distributed learning methodologies, making it a valuable contribution to the field of privacy-preserving machine learning.