Amplitude-Encoding Variational Quantum Circuits
- Amplitude-Encoding VQCs are defined by embedding normalized n-dimensional classical vectors into quantum state amplitudes using ⌈log₂ n⌉ qubits, an exponential compression of the data register.
- They admit both exact state preparation through multi-controlled rotations and hardware-efficient variational schemes such as AAE/ACAE.
- Advanced ansätze balance circuit depth, noise sensitivity, and expressivity, making these VQCs practical for quantum machine learning and classification tasks.
Amplitude-Encoding Variational Quantum Circuits (VQCs) offer an exponentially efficient mapping of classical data onto quantum states by embedding a full data vector into the amplitudes of a register of only logarithmically many qubits. This paradigm is central to quantum machine learning and quantum variational algorithms but is characterized by deep implementation trade-offs, balancing minimal qubit requirements against increased circuit depth and sensitivity to hardware noise. Recent advances explore both exact state-preparation schemes and variationally optimized hardware-efficient alternatives, extending to complex-valued data and custom class-separating embeddings.
1. Formal Definition and Mathematical Foundations
Amplitude encoding maps a real or complex classical data vector x ∈ ℝᴺ or x ∈ ℂᴺ, normalized such that Σᵢ |xᵢ|² = 1, into the amplitudes of a quantum state on n = ⌈log₂ N⌉ qubits, with optional zero-padding to reach dimension 2ⁿ:

|ψ(x)⟩ = Σᵢ xᵢ |i⟩,  i = 0, …, N−1.
For complex vectors, the same structure holds, and the encoding is defined up to a global phase (Biswas, 18 Mar 2025, Tudisco et al., 1 Aug 2025, Mitsuda et al., 2022). In the Hamming-weight restricted case, an amplitude-encoded state is supported only on basis states of fixed bit count, i.e., |ψ⟩ = Σ_{|i|=k} xᵢ |i⟩ for bit strings i of Hamming weight k (Monbroussou et al., 2023).
This encoding achieves maximal data packing—an N-dimensional vector is embedded using only ⌈log₂ N⌉ qubits, in contrast to feature-wise angle or phase encoding, which requires N qubits.
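The padding-and-normalization step of this definition can be sketched in NumPy (the function name is ours, for illustration only):

```python
import numpy as np

def amplitude_encode(x):
    """Map a classical vector to the amplitude vector of an n-qubit state.

    Pads x with zeros to the next power of two and normalizes, so the
    returned array is a valid statevector on ceil(log2(len(x))) qubits.
    """
    x = np.asarray(x, dtype=complex)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    padded = np.zeros(2 ** n_qubits, dtype=complex)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n_qubits

state, n = amplitude_encode([3.0, 4.0, 0.0])  # 3 features -> 2 qubits
```

Note that the normalization discards the overall scale of x, which is why some pipelines append the norm as an extra classical feature.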
2. Quantum Circuit Realizations and Variational Approximate Schemes
Exact State Preparation
Exact amplitude encoding, e.g., via Möttönen's tree of uniformly controlled rotations, accomplishes the map |0⟩^⊗n ↦ Σᵢ xᵢ |i⟩. The required circuit depth scales as O(2ⁿ), dominated by multi-controlled rotations and CNOTs (Biswas, 18 Mar 2025, Tudisco et al., 1 Aug 2025, Mordacci et al., 19 Sep 2025). Implementations in Qiskit use the initialize instruction, compiling to sequences of R_y/R_z rotations and CNOT gates.
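The angle tree underlying Möttönen-style preparation can be sketched for the simplified case of nonnegative real amplitudes (sign and phase corrections via R_z gates are omitted; function names are illustrative):

```python
import numpy as np

def ry_tree_angles(x):
    """Uniformly controlled R_y angle tree for a nonnegative real statevector.

    Works from the leaves up: adjacent amplitude pairs are merged into
    parent norms, and each parent stores the R_y angle that splits it.
    """
    x = np.asarray(x, dtype=float)
    n = int(np.log2(len(x)))
    levels = []
    blocks = x
    for _ in range(n):
        pairs = blocks.reshape(-1, 2)
        parents = np.linalg.norm(pairs, axis=1)
        theta = np.zeros(len(parents))
        nz = parents > 0
        theta[nz] = 2 * np.arccos(np.clip(pairs[nz, 0] / parents[nz], -1.0, 1.0))
        levels.append(theta)
        blocks = parents
    return levels[::-1]  # top level (a single angle) first

def reconstruct(levels):
    """Rebuild the statevector by walking cos/sin products down the tree."""
    amps = np.array([1.0])
    for theta in levels:
        half = theta / 2
        amps = np.stack([amps * np.cos(half), amps * np.sin(half)], axis=1).ravel()
    return amps

target = np.array([0.5, 0.5, 0.5, 0.5])
levels = ry_tree_angles(target)
```

The tree holds 2ⁿ − 1 angles in total, which is exactly the source of the exponential gate count quoted above.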
Variational and Hybrid Loaders
For near-term hardware, various variational strategies approximate the amplitude encoding:
- Approximate Amplitude Encoding (AAE) employs a shallow parameterized ansatz (alternating single-qubit R_y rotation layers and entangling CNOTs), trained to minimize the discrepancy (e.g., MMD loss) between measured output distributions (in computational/Hadamard bases) and target amplitude-derived distributions; sign information is captured by incorporating Hadamard-basis measurements and auxiliary qubits if needed (Nakaji et al., 2021, Mitsuda et al., 2022).
- Approximate Complex Amplitude Encoding (ACAE) further generalizes AAE for complex vectors using fidelity-based cost functions estimated efficiently via classical shadows. Gradients are obtained using the parameter-shift rule, with circuit depth scaling polynomially in qubit number (Mitsuda et al., 2022, Truger et al., 2024).
- EnQode clusters normalized data vectors, pre-optimizes an ansatz for each centroid, and at inference initializes parameters to the closest centroid's solution, followed by (optionally) a few gradient steps for fine-tuning (Morgan et al., 22 Aug 2025). This approach achieves high-fidelity embeddings at circuit depths far shallower than exact state preparation.
- Hamming-weight preserving ansätze (e.g., using RBS/FBS gates on connected graphs) variationally encode vectors in subspaces of fixed Hamming weight k, yielding full expressivity for small k but inheriting barren plateau challenges for large support subspaces (Monbroussou et al., 2023).
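A minimal sketch of the variational-loader idea: a two-qubit R_y + CNOT ansatz, a fidelity cost against a real target state, and gradients via the parameter-shift rule. This is a toy statevector simulator of our own, not the cited implementations (AAE proper trains on measured distributions with an MMD loss):

```python
import numpy as np

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def loader_state(params):
    """Two-layer hardware-efficient loader: (R_y x R_y) then CNOT, twice."""
    state = np.array([1.0, 0.0, 0.0, 0.0])
    for l in range(2):
        state = CNOT @ (np.kron(ry(params[2 * l]), ry(params[2 * l + 1])) @ state)
    return state

def infidelity(params, target):
    """Cost 1 - |<target|psi(params)>|^2 for a real target state."""
    return 1.0 - float(np.dot(loader_state(params), target)) ** 2

def parameter_shift_grad(params, target):
    """Exact gradient: each theta sits in one R_y gate, so shifts of pi/2 apply."""
    grad = np.zeros_like(params)
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += np.pi / 2
        minus[i] -= np.pi / 2
        grad[i] = (infidelity(plus, target) - infidelity(minus, target)) / 2
    return grad

target = np.full(4, 0.5)                     # uniform two-qubit state
theta = np.array([0.4, -0.3, 0.2, 0.1])
g = parameter_shift_grad(theta, target)
```

The same shift rule carries over to ACAE's shadow-estimated fidelity costs; only the cost evaluation changes, not the gradient recipe.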
3. Circuit Depth, Resource Scaling, and Quantum Resource Trade-offs
| Encoding Strategy | Qubit Count | Gate Count / Depth | Fidelity / Comments |
|---|---|---|---|
| Exact amplitude | ⌈log₂ N⌉ | O(2ⁿ) multiqubit gates | High, but exponential depth |
| AAE/ACAE/EnQode | ⌈log₂ N⌉, plus ancillae | poly(n) single/two-qubit gates | ≈0.90–0.99 for small n |
| Angle | N | N rotations | Shallow, less compression |
Amplitude encoding compresses the input dimension exponentially in qubit number, but at the cost of exponential gate depth in exact schemes. Variational approximations achieve substantial reductions: e.g., with five qubits, EnQode reaches fidelities in the 0.90–0.99 range at circuit depths far shallower than exact preparation (Morgan et al., 22 Aug 2025).
Hybrid encoding schemes apply amplitude encoding to a power-of-two-sized sub-vector of the features, reverting to per-qubit angle/phase encoding for the remainder (Biswas, 18 Mar 2025).
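A sketch of such a hybrid split, assuming the first 2^k features are amplitude-encoded on k qubits and each remaining feature is angle-encoded on its own qubit (the split point and function name are illustrative, not from the cited scheme):

```python
import numpy as np

def hybrid_encode(x, k):
    """Amplitude-encode x[:2**k] on k qubits; angle-encode each remaining
    feature x_j on its own qubit as cos(x_j)|0> + sin(x_j)|1>.

    Returns the joint statevector on k + (len(x) - 2**k) qubits.
    """
    amp_part = np.asarray(x[: 2 ** k], dtype=float)
    amp_part = amp_part / np.linalg.norm(amp_part)
    state = amp_part
    for xj in x[2 ** k:]:
        # tensor on one angle-encoded qubit per leftover feature
        state = np.kron(state, np.array([np.cos(xj), np.sin(xj)]))
    return state

# 6 features -> 2 amplitude qubits (first 4 features) + 2 angle qubits
psi = hybrid_encode([1.0, 2.0, 3.0, 4.0, 0.7, 0.2], k=2)
```

Because each factor is unit-norm, the Kronecker product is again a valid statevector, so the two encodings compose without renormalization.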
4. Integration into Variational Quantum Algorithms and Performance
Amplitude-encoded VQCs are integrated as follows:
- State preparation: Load via exact or approximate circuits.
- Variational layer: A parameterized ansatz (StronglyEntanglingLayers, EfficientSU2, or custom) acts on the encoded register.
- Measurement and classical post-processing: Observables are measured, and results are post-processed by classical optimizers (e.g., COBYLA, L-BFGS-B, Adam).
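The three stages can be sketched end to end on a toy two-qubit example (our own simulator; a single R_y layer plus CNOT stands in for the richer templates named above, and the "optimizer" is one plain gradient-descent step using parameter-shift gradients):

```python
import numpy as np

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
ZZ = np.diag([1.0, -1.0, -1.0, 1.0])  # observable Z x Z

def model_expectation(x, theta):
    # 1. state preparation: amplitude-encode the normalized input
    state = np.asarray(x, dtype=float)
    state = state / np.linalg.norm(state)
    # 2. variational layer: R_y rotations followed by an entangling CNOT
    state = CNOT @ (np.kron(ry(theta[0]), ry(theta[1])) @ state)
    # 3. measurement: expectation value of Z x Z
    return float(state @ ZZ @ state)

def cost(x, theta, label):
    return (model_expectation(x, theta) - label) ** 2

x, label = [1.0, 2.0, 3.0, 4.0], 1.0
theta = np.array([0.3, -0.2])
exp = model_expectation(x, theta)
# parameter-shift gradient of the expectation, chained into the cost
grad_E = np.array([
    (model_expectation(x, theta + np.pi / 2 * e)
     - model_expectation(x, theta - np.pi / 2 * e)) / 2
    for e in np.eye(2)
])
grad = 2 * (exp - label) * grad_E
theta_new = theta - 0.05 * grad   # one classical optimizer step
```

The shift rule is applied to the expectation value, not the squared cost, since only linear observables admit the exact two-point shift formula; the chain rule supplies the rest.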
In supervised learning tasks (classification), amplitude encoding combined with hybrid/expressive ansätze outperforms angle encoding on moderate multiclass problems by 10–40 percentage points provided sufficient circuit depth and noise resilience (Tudisco et al., 1 Aug 2025). In reinforcement learning, amplitude encoding enables state compression and high performance with far fewer parameters and qubits than classical counterparts (Chen et al., 2021).
Empirical quantum resource efficiency has been validated in models such as QRNNs and quantum classifiers, with approximate encoding leading to improved generalization and lower test error, especially when using signal-preserving augmentation techniques (Morgan et al., 22 Aug 2025, Nakaji et al., 2021).
5. Trainability, Barren Plateaus, and Expressivity
The trainability of amplitude-encoding VQCs depends critically on the circuit’s subspace dimension:
- In the full Hilbert space (dimension 2ⁿ), gradient variance shrinks exponentially in the qubit number n (barren plateau).
- Hamming-weight-preserving circuits with fixed weight k permit only polynomially vanishing gradient variance for constant k, hence better trainability (Monbroussou et al., 2023).
- Hardware-efficient ansätze for AAE/ACAE can avoid deep plateaus in small-n regimes, but may still encounter local minima or barren regions at scale (Truger et al., 2024, Mitsuda et al., 2022).
- Injected or warm-started solutions (using ACAE, for example) can precondition VQAs, speeding convergence and avoiding trap regions in the optimization landscape (Truger et al., 2024).
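The shrinking gradient variance can be observed numerically with a small statevector simulation (random R_y + CNOT-chain circuits of our own design; layer and sample counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def apply_cnot(state, n, ctrl, targ):
    """Apply CNOT(ctrl -> targ) to an n-qubit statevector (qubit 0 = MSB)."""
    idx = np.arange(2 ** n)
    on = (idx >> (n - 1 - ctrl)) & 1
    flipped = idx ^ (on << (n - 1 - targ))
    return state[flipped]

def expval_z0(params, n):
    """<Z_0> after layers of per-qubit R_y rotations and a CNOT chain."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for layer in params:
        rot = np.array([[1.0]])
        for t in layer:
            rot = np.kron(rot, ry(t))
        state = rot @ state
        for q in range(n - 1):
            state = apply_cnot(state, n, q, q + 1)
    signs = 1 - 2 * ((np.arange(2 ** n) >> (n - 1)) & 1)
    return float(np.sum(signs * state ** 2))

def grad_variance(n, layers, samples=200):
    """Variance of the parameter-shift gradient w.r.t. the first angle."""
    grads = []
    for _ in range(samples):
        p = rng.uniform(0, 2 * np.pi, size=(layers, n))
        plus, minus = p.copy(), p.copy()
        plus[0, 0] += np.pi / 2
        minus[0, 0] -= np.pi / 2
        grads.append((expval_z0(plus, n) - expval_z0(minus, n)) / 2)
    return float(np.var(grads))

v_small, v_large = grad_variance(2, 4), grad_variance(6, 12)
```

Comparing the two variances shows the deep six-qubit circuit sitting well below the shallow two-qubit one, the qualitative signature of a barren plateau.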
6. Extensions: Adaptive and Data-Dependent Encoding Variants
Recent research underscores the limitations of vanilla amplitude encoding for datasets with intrinsic class clusters or complex manifolds:
- Triplet-Loss Encoding: Parameterized data-embedding unitaries, trained with a class-separability-inducing triplet loss, achieve much higher interclass trace distances and substantially better classification accuracy and circuit depth than conventional amplitude encoding, especially on complex, high-dimensional tasks (e.g., MNIST, MedMNIST) (Mordacci et al., 19 Sep 2025).
- Unitary Kernel Method (UKM) and Variational Circuit Realization (VCR): Ansätze-independent kernel learning can be performed directly in amplitude space, with optimal unitaries subsequently variationally compiled to conventional circuit forms. This three-step scheme sets a theoretical upper bound for amplitude-encoded VQC performance (Miyahara et al., 2021).
7. Practical Considerations and Application Guidelines
- When to use amplitude encoding: Optimal when qubit resources are at a premium and data vectors are not excessively large (to limit gate depth) (Chen et al., 2021, Biswas, 18 Mar 2025).
- Approximate encoding: Critical for making amplitude encoding NISQ-compatible. Empirically justified at small qubit counts in QRNNs and SVD estimation (Morgan et al., 22 Aug 2025, Nakaji et al., 2021).
- Hybrid strategies: Mixtures of amplitude and angle/phase encoding combine efficiency (qubit compression) and expressivity, yielding enhanced trainability and classification power (Biswas, 18 Mar 2025).
- Encoding as hyperparameter: Encoding choice (amplitude, angle, hybrid, adaptive/triplet loss) interacts strongly with ansatz depth and optimizer, and must be tuned per-dataset for optimal results (Tudisco et al., 1 Aug 2025).
Overall, amplitude-encoding VQCs remain a foundational tool for high-compression quantum machine learning, subject to ongoing innovation in scalable circuit compilation, variational approximation, and embedding schemes that inject data manifold structure directly into quantum hardware (Biswas, 18 Mar 2025, Tudisco et al., 1 Aug 2025, Morgan et al., 22 Aug 2025, Mitsuda et al., 2022, Mordacci et al., 19 Sep 2025, Truger et al., 2024, Miyahara et al., 2021).