
Efficient Privacy-Preserving Training of Quantum Neural Networks by Using Mixed States to Represent Input Data Ensembles (2509.12465v1)

Published 15 Sep 2025 in quant-ph

Abstract: Quantum neural networks (QNNs) are gaining increasing interest due to their potential to detect complex patterns in data by leveraging uniquely quantum phenomena. This makes them particularly promising for biomedical applications. In these applications and in other contexts, increasing statistical power often requires aggregating data from multiple participants. However, sharing data, especially sensitive information like personal genomic sequences, raises significant privacy concerns. Quantum federated learning offers a way to collaboratively train QNN models without exposing private data. However, it faces major limitations, including high communication overhead and the need to retrain models when the task is modified. To overcome these challenges, we propose a privacy-preserving QNN training scheme that utilizes mixed quantum states to encode ensembles of data. This approach allows for the secure sharing of statistical information while safeguarding individual data points. QNNs can be trained directly on these mixed states, eliminating the need to access raw data. Building on this foundation, we introduce protocols supporting multi-party collaborative QNN training applicable across diverse domains. Our approach enables secure QNN training with only a single round of communication per participant, provides high training speed, and offers task generality, i.e., new analyses can be conducted without reacquiring information from participants. We present the theoretical foundation of our scheme's utility and privacy protections, which prevent the recovery of individual data points and resist membership inference attacks as measured by differential privacy. We then validate its effectiveness on three datasets, focusing on genomic studies, and indicate how the scheme can be used in other domains without adaptation.
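The core idea in the abstract — representing a data ensemble as a single mixed quantum state and evaluating a QNN on it rather than on raw records — can be sketched numerically. The snippet below is an illustrative toy, not the paper's actual protocol: the amplitude encoding, the one-parameter "QNN" unitary, and the observable are all assumptions chosen for simplicity. It shows the key property the scheme relies on: a participant can share only the aggregate density matrix ρ = (1/N) Σᵢ |ψᵢ⟩⟨ψᵢ|, and model expectations Tr(U ρ U† M) are computable from ρ alone, with individual rows never leaving the participant.

```python
import numpy as np

def amplitude_encode(x):
    """Encode a real feature vector as a normalized pure state |psi> (assumed encoding)."""
    v = np.asarray(x, dtype=float)
    return v / np.linalg.norm(v)

def mixed_state(dataset):
    """Uniform classical mixture over the ensemble: rho = (1/N) sum_i |psi_i><psi_i|."""
    dim = len(dataset[0])
    rho = np.zeros((dim, dim))
    for x in dataset:
        psi = amplitude_encode(x)
        rho += np.outer(psi, psi)
    return rho / len(dataset)

def qnn_expectation(rho, theta):
    """Expectation Tr(U rho U^dag M) for a toy one-parameter circuit U(theta).

    U rotates the first two amplitudes; M = diag(+1, -1, +1, ...) is a
    stand-in measurement observable. Both are hypothetical placeholders.
    """
    dim = rho.shape[0]
    U = np.eye(dim)
    c, s = np.cos(theta), np.sin(theta)
    U[0, 0], U[0, 1], U[1, 0], U[1, 1] = c, -s, s, c
    M = np.diag([1.0 if i % 2 == 0 else -1.0 for i in range(dim)])
    return float(np.trace(U @ rho @ U.conj().T @ M).real)

# A participant aggregates 50 private 4-dimensional records into one mixed
# state and shares only rho (a 4x4 matrix), never the raw rows.
rng = np.random.default_rng(0)
data = rng.normal(size=(50, 4))
rho = mixed_state(data)
print(np.trace(rho))             # unit trace: rho is a valid density matrix
print(qnn_expectation(rho, 0.3)  # model output computed from rho alone
      )
```

Because ρ is a fixed-size summary that supports any downstream parameterized circuit, this also illustrates the "task generality" claim: a new analysis means evaluating a new U on the same shared ρ, with no further communication round.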

