Universality and kernel-adaptive training for classically trained, quantum-deployed generative models (2510.08476v1)

Published 9 Oct 2025 in quant-ph

Abstract: The instantaneous quantum polynomial (IQP) quantum circuit Born machine (QCBM) has been proposed as a promising quantum generative model over bitstrings. Recent works have shown that training the IQP-QCBM is classically tractable with respect to the Gaussian-kernel maximum mean discrepancy (MMD) loss function, while the sampling itself retains the potential for a quantum advantage. Nonetheless, the model has a number of aspects where improvements would be important for more general utility: (1) the basic model is known not to be universal, i.e., it cannot represent arbitrary distributions, and it was not known whether universality can be achieved by adding hidden (ancillary) qubits; (2) the fixed Gaussian kernel used in the MMD loss can cause training issues, e.g., vanishing gradients. In this paper, we resolve the first question and make decisive strides on the second. We prove that for an $n$-qubit IQP generator, adding $n + 1$ hidden qubits makes the model universal. For the latter, we propose a kernel-adaptive training method in which the kernel is adversarially trained. We show that in the kernel-adaptive method, convergence of the MMD value implies weak convergence in distribution of the generator. We also analyze the limitations of the MMD-based training method analytically. Finally, we verify the performance benefits on a dataset crafted to spotlight the improvements offered by the suggested method. The results show that kernel-adaptive training outperforms a fixed Gaussian kernel in total variation distance, and the gap increases with the dataset dimensionality. These modifications and analyses shed light on the limits and potential of these new quantum generative methods, which could offer the first truly scalable insights into the comparative capacities of classical versus quantum models, even without access to scalable quantum computers.
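
To make the loss concrete, the following is a minimal sketch (not the authors' code) of an unbiased Gaussian-kernel MMD² estimator over bitstring samples, together with a crude "kernel-adaptive" step that adversarially selects the bandwidth maximizing the MMD estimate, which the generator would then minimize. The function names, the fixed bandwidth grid, and the NumPy-only setup are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: Gaussian-kernel MMD^2 between bitstring samples
# and a simple adversarial (kernel-adaptive) variant. Bandwidth grid and
# helper names are assumptions for this example.
import numpy as np

def gaussian_kernel(x, y, sigma):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for batches of bitstrings."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(p_samples, q_samples, sigma):
    """Unbiased estimate of MMD^2 between two sets of bitstring samples."""
    kxx = gaussian_kernel(p_samples, p_samples, sigma)
    kyy = gaussian_kernel(q_samples, q_samples, sigma)
    kxy = gaussian_kernel(p_samples, q_samples, sigma)
    m, n = len(p_samples), len(q_samples)
    # Drop diagonal terms for the unbiased within-set averages.
    term_xx = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_yy = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * kxy.mean()

def adversarial_mmd2(p_samples, q_samples, sigmas=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Kernel-adaptive loss: the kernel (here only its bandwidth) is chosen
    adversarially to maximize the MMD^2 estimate."""
    return max(mmd2(p_samples, q_samples, s) for s in sigmas)

# Example: compare samples from two distributions over 4-bit strings.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(200, 4))    # stand-in for target samples
model = rng.integers(0, 2, size=(200, 4))   # stand-in for generator samples
print(adversarial_mmd2(data, model))
```

In the paper's setting the adversarial step trains the kernel itself rather than scanning a fixed bandwidth grid; the grid search above is only the simplest stand-in for that maximization.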
