The effect of data encoding on the expressive power of variational quantum machine learning models (2008.08605v2)

Published 19 Aug 2020 in quant-ph and stat.ML

Abstract: Quantum computers can be used for supervised learning by treating parametrised quantum circuits as models that map data inputs to predictions. While a lot of work has been done to investigate practical implications of this approach, many important theoretical properties of these models remain unknown. Here we investigate how the strategy with which data is encoded into the model influences the expressive power of parametrised quantum circuits as function approximators. We show that one can naturally write a quantum model as a partial Fourier series in the data, where the accessible frequencies are determined by the nature of the data encoding gates in the circuit. By repeating simple data encoding gates multiple times, quantum models can access increasingly rich frequency spectra. We show that there exist quantum models which can realise all possible sets of Fourier coefficients, and therefore, if the accessible frequency spectrum is asymptotically rich enough, such models are universal function approximators.

Citations (435)

Summary

  • The paper demonstrates that variational quantum models can be represented as Fourier series, linking data encoding to frequency control.
  • The methodology shows that the eigenvalues of the data-encoding Hamiltonians dictate the accessible frequency spectrum for function approximation.
  • The paper highlights that repeated data encoding enhances expressivity, suggesting practical guidelines for designing universal quantum models.

The Effect of Data Encoding on the Expressive Power of Variational Quantum Machine Learning Models

This paper addresses a fundamental property of variational quantum machine learning (QML) models, which use parametrized quantum circuits much as classical machine learning uses neural networks. It investigates how the strategy used to encode data affects the expressive power of these models as function approximators. The primary contribution is to show that quantum models can be framed as partial Fourier series, whose accessible frequencies are dictated by the data encoding and whose coefficients are controlled by the trainable parts of the circuit.
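
Concretely, the paper's central observation (in its notation, with M the measured observable and U(x, θ) the full circuit) is that such a model is a partial Fourier series in the data:

$$f_\theta(x) = \langle 0 | U^\dagger(x, \theta) \, M \, U(x, \theta) | 0 \rangle = \sum_{\omega \in \Omega} c_\omega(\theta) \, e^{i \omega x},$$

where the spectrum Ω is fixed by the eigenvalues of the data-encoding Hamiltonians and the coefficients c_ω(θ) by the trainable blocks and the observable.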

Summary of Core Contributions

The authors develop a theoretical framework for the function classes that quantum models can realize, concentrating on the role of data encoding. Several contributions stand out:

  1. Fourier Series Representation: The paper shows that variational quantum models can be written as partial Fourier series in the data. The frequencies accessible in the expansion are set by the gates that encode the data, while the Fourier coefficients are controlled by the trainable components of the circuit.
  2. Frequency Spectrum Characterization: The frequency spectrum of these models is determined solely by the eigenvalues of the data-encoding Hamiltonians. This yields a direct way to characterize the function families a given quantum model can learn.
  3. Repetition Enhances Expressivity: Applying the data encoding repeatedly, either in parallel across qubits or sequentially in layers, enlarges the set of accessible frequencies and thereby the model's expressivity. Even simple single-qubit Pauli rotations, when repeated, give the model access to increasingly rich frequency spectra (see the numerical sketch after this list).
  4. Universality of Quantum Models: With a sufficiently flexible circuit architecture, quantum models can realize all possible sets of Fourier coefficients; if the accessible frequency spectrum is asymptotically rich enough, such models can approximate any square-integrable function to arbitrary precision.
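
As a numerical illustration of points 1 through 3, here is a minimal single-qubit sketch in NumPy. It is not the paper's code: the RY trainable blocks, the Pauli-Z encoding gate e^{-ixZ/2}, and the Z-basis measurement are illustrative choices. The script samples the model on a uniform grid and reads off the integer frequencies present via an FFT, showing that L repeated encodings expose frequencies up to L:

```python
import numpy as np

# Pauli-Z observable; its exponential also serves as the data-encoding gate.
Z = np.diag([1.0, -1.0])

def encoding(x):
    """Data-encoding gate S(x) = exp(-i x Z / 2), eigenvalues ±1/2."""
    return np.diag(np.exp(-0.5j * x * np.diag(Z)))

def trainable(theta):
    """A single RY rotation as the trainable block (illustrative choice)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def quantum_model(x, thetas, layers):
    """f(x) = <0| U(x, theta)^dagger Z U(x, theta) |0> with repeated encodings."""
    state = np.array([1.0, 0.0], dtype=complex)
    state = trainable(thetas[0]) @ state
    for l in range(layers):
        state = encoding(x) @ state          # re-encode the same input x
        state = trainable(thetas[l + 1]) @ state
    return np.real(state.conj() @ Z @ state)

rng = np.random.default_rng(0)
for layers in (1, 2, 3):
    thetas = rng.uniform(0, 2 * np.pi, layers + 1)
    xs = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    fx = np.array([quantum_model(x, thetas, layers) for x in xs])
    # f is a degree-`layers` trigonometric polynomial, so the FFT is exact here.
    amps = np.abs(np.fft.rfft(fx)) / len(xs)
    present = [k for k, a in enumerate(amps) if a > 1e-10]
    print(f"{layers} encoding layer(s): integer frequencies present = {present}")
```

For generic random parameters the printout lists frequencies {0, ..., L} for L encoding layers, matching the paper's claim that L repetitions of a Pauli-rotation encoding yield the integer spectrum up to degree L.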

Implications and Speculative Insights

The implications of these findings are significant, particularly in guiding the design of quantum models:

  • The ability of a quantum model to express complex functions is directly tied to how data is encoded, making the encoding strategy a first-class design decision in quantum algorithm design.
  • Practical guidelines emerge: repeating the data encoding, and classically pre-processing the inputs, both enlarge the accessible frequency spectrum and hence the model's expressivity.
  • The paper implicitly points towards time-series and signal-processing tasks as areas where quantum models could offer particular advantages, given the inherently periodic structure of the functions they realize.
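
One concrete instance of the pre-processing point, which follows directly from the Fourier picture above (an illustrative consequence, not a separate result of the paper): classically rescaling the input before encoding, x ↦ γx, rescales every accessible frequency,

$$f_\theta(\gamma x) = \sum_{\omega \in \Omega} c_\omega(\theta) \, e^{i (\gamma \omega) x},$$

so even a fixed encoding circuit can be tuned to cover different frequency ranges.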

Future Directions in Quantum AI

Considering the established results, several future avenues arise:

  • Integrating classical data pre-processing with quantum encodings is a promising route to enhancing the function-approximation power of quantum models without excessive resource demands.
  • The universality theorem, while requiring deep circuits, suggests a line of inquiry into more depth-efficient architectures that maintain universality.

This paper lays the groundwork for understanding how encoding methods modulate the expressivity of quantum models, providing theoretical insights that motivate practical improvements in quantum machine learning design. The proposed framework reaches beyond current quantum technologies and may help shape future research directions as quantum hardware capabilities develop.
