Bit-bit encoding, optimizer-free training and sub-net initialization: techniques for scalable quantum machine learning (2501.02148v2)

Published 4 Jan 2025 in quant-ph

Abstract: Quantum machine learning for classical data is currently perceived to have a scalability problem due to (i) a bottleneck at the point of loading data into quantum states, (ii) the lack of clarity around good optimization strategies, and (iii) barren plateaus that occur when the model parameters are randomly initialized. In this work, we propose techniques to address all of these issues. First, we present a quantum classifier that encodes both the input and the output as binary strings, which results in a model that has no restrictions on expressivity over the encoded data but requires fast classical compression of typical high-dimensional datasets to only the most predictive degrees of freedom. Second, we show that if one parameter is updated at a time, quantum models can be trained without a classical optimizer in a way that guarantees convergence to a local minimum, something not possible for classical deep learning models. Third, we propose a parameter initialization strategy called sub-net initialization to avoid barren plateaus, in which smaller models, trained on more compactly encoded data with fewer qubits, are used to initialize models that utilize more qubits. Along with theoretical arguments for their efficacy, we demonstrate the combined performance of these methods on subsets of the MNIST dataset for models with an all-to-all connected architecture that use up to 16 qubits in simulation. This allows us to conclude that the loss function consistently decreases as the capability of the model, measured by the number of parameters and qubits, increases, and that this behavior is maintained for datasets of varying complexity. Together, these techniques offer a coherent framework for scalable quantum machine learning.
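
The optimizer-free training claim admits a concrete illustration. For circuits built from single-qubit rotation gates, the loss restricted to any one angle is a sinusoid, L(θ) = A·sin(θ + φ) + C, so each parameter can be set to its exact minimizer from three loss evaluations, and cycling through the parameters gives a monotonically non-increasing loss that converges to a local minimum. The sketch below follows the well-known Rotosolve-style closed-form update under that sinusoidal assumption; it is not necessarily the authors' exact procedure, and the toy loss, the `sub_net_init` helper, and the parameter layout are hypothetical.

```python
import numpy as np

# Hedged sketch of one-parameter-at-a-time, optimizer-free training.
# Assumption (not taken from the paper): the loss as a function of a single
# rotation angle is sinusoidal, L(theta_k) = A*sin(theta_k + phi) + C, as
# holds for models built from single-qubit rotation gates. The update below
# is the standard Rotosolve closed-form minimizer; the authors' exact update
# rule may differ.

def analytic_step(loss, theta, k):
    """Set parameter k to the exact minimizer of the loss along its axis,
    using three loss evaluations and the sinusoidal form above."""
    base = theta[k]
    l0 = loss(theta)                      # L(base)
    theta[k] = base + np.pi / 2
    lp = loss(theta)                      # L(base + pi/2)
    theta[k] = base - np.pi / 2
    lm = loss(theta)                      # L(base - pi/2)
    # argmin of A*sin(theta + phi) + C recovered from the three samples:
    theta[k] = base - np.pi / 2 - np.arctan2(2.0 * l0 - lp - lm, lp - lm)
    return theta

def train(loss, theta, sweeps=20):
    """Cyclic coordinate updates: each step cannot increase the loss, so the
    loss sequence is monotone and converges to a local minimum with no
    learning rate or classical optimizer involved."""
    for _ in range(sweeps):
        for k in range(len(theta)):
            theta = analytic_step(loss, theta, k)
    return theta

def sub_net_init(small_theta, n_params_large):
    """Hypothetical reading of sub-net initialization: reuse the parameters
    of a trained smaller model and zero-initialize the newly added ones so
    the extra gates start near the identity. The paper's actual parameter
    mapping between the small and large circuits may differ."""
    theta = np.zeros(n_params_large)
    theta[: len(small_theta)] = small_theta
    return theta

if __name__ == "__main__":
    # Separable toy loss standing in for a variational-circuit expectation.
    rng = np.random.default_rng(0)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=6)
    toy_loss = lambda t: float(np.sum(np.sin(t + phases)))

    theta = train(toy_loss, rng.uniform(0.0, 2.0 * np.pi, size=6))
    print("small-model loss:", toy_loss(theta))  # reaches -6, the minimum

    # Warm-start a hypothetical larger model from the trained small one.
    theta_large = sub_net_init(theta, n_params_large=10)
```

Because every update is a closed-form argmin rather than a gradient step, no learning rate or classical optimizer enters, which is the sense in which this style of training is optimizer-free.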

Authors (1)