
Quantum Complex-Valued Self-Attention Model

Updated 21 March 2026
  • QCSAM is a quantum-native self-attention model that extends classical Transformer attention by leveraging intrinsic quantum phase and amplitude information.
  • It utilizes complex-valued inner products and the Complex Linear Combination of Unitaries (CLCU) to perform phase-sensitive kernel estimation efficiently.
  • Empirical studies demonstrate QCSAM's superior performance on tasks like MNIST, achieving high accuracy with reduced qubit requirements and enhanced multi-head attention.

The Quantum Complex-Valued Self-Attention Model (QCSAM) is a fully quantum-native self-attention architecture that generalizes classical Transformer attention to the quantum domain by leveraging the intrinsic phase and amplitude information present in quantum states. QCSAM is characterized by its use of complex-valued similarities, direct phase-sensitive kernel estimation, and explicit handling of quantum superpositions, enabling expressivity and precision unattainable by purely real or classical approaches. This model structure has been demonstrated to achieve state-of-the-art results on vision and sequence tasks with minimal qubit resources by fully aligning the self-attention paradigm with the mathematical structure of quantum mechanics (Chen et al., 24 Mar 2025).

1. Theoretical Foundations and Motivation

Classical self-attention maps pairs of real vectors in $\mathbb{R}^{d_k}$ to a scalar similarity score via the scaled dot-product and softmax:

$$\mathrm{Attention}(q, k, v) = \mathrm{softmax}\!\left(\frac{q k^\top}{\sqrt{d_k}}\right) v$$

However, this framework neglects quantum phase: inner products between amplitude-encoded quantum states $\ket{Q}, \ket{K} \in \mathbb{C}^{2^n}$ are intrinsically complex, and discarding phase discards the interference and entanglement properties fundamental to quantum advantage in computation and representation. QCSAM addresses this by promoting the attention similarity to the full complex inner product (Chen et al., 24 Mar 2025):

$$S(\ket{\psi}, \ket{\phi}) = \langle \psi | \phi \rangle = \sum_j \left[(a_j c_j + b_j d_j) + i(a_j d_j - b_j c_j)\right]$$

where $a_j + i b_j$ and $c_j + i d_j$ are the amplitudes of $\ket{\psi}$ and $\ket{\phi}$, respectively. This preserves both amplitude and phase, which is crucial for quantum information processing, as verified in foundational and recent literature (Pecilli et al., 6 Feb 2026; Evans et al., 2024).
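The distinction can be seen in a few lines of plain Python, where complex amplitudes are built in. The states below are illustrative two-amplitude examples, not data from the paper; the point is that a real-valued similarity such as the squared overlap cannot see the relative phase that the complex inner product exposes:

```python
import cmath

def complex_overlap(psi, phi):
    """<psi|phi> = sum_j conj(psi_j) * phi_j, keeping amplitude and phase."""
    return sum(p.conjugate() * q for p, q in zip(psi, phi))

# Two normalized states that differ only by a relative phase on |1>.
psi = [1 / 2**0.5, 1 / 2**0.5]
phi = [1 / 2**0.5, cmath.exp(1j * cmath.pi / 2) / 2**0.5]

s = complex_overlap(psi, phi)
print(s)          # ≈ 0.5 + 0.5j: the imaginary part carries the phase
print(abs(s)**2)  # ≈ 0.5: a SWAP-test-style real kernel sees only this
```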

2. Complex Linear Combination of Unitaries (CLCUs) and Implementation

Classical self-attention can be interpreted as a weighted summation over value vectors. In the quantum domain, QCSAM employs the Complex Linear Combination of Unitaries (CLCU) framework, an extension of the standard LCU protocol that supports arbitrary complex weights. For a set of unitaries $U_j$ and complex coefficients $\alpha_j = |\alpha_j| e^{i\theta_j}$:

$$A = \frac{1}{\Omega'} \sum_{j=0}^{N-1} \alpha_j U_j, \qquad \Omega' = \sqrt{\sum_j |\alpha_j|^2}$$

Preparation entails (i) state preparation on an ancilla register, (ii) SELECT operations applying $U_j$ controlled on the ancilla, (iii) the inverse state-preparation (UNPREP) step, and (iv) post-selection on the ancilla state. The resulting circuit efficiently implements the desired non-unitary operation, directly encoding complex-valued attention scores as amplitudes (Chen et al., 24 Mar 2025).

The resource estimate for $N$ unitaries acting on $m$-qubit targets is:

  • Ancilla qubits: $n = \lceil \log_2 N \rceil$
  • Gate depth: $O(N)$ per state-preparation/UNPREP layer
  • Success probability: $O(1/\Omega'^2)$, often requiring amplitude amplification (Chen et al., 24 Mar 2025)

The CLCU framework thus enables QCSAM layers to construct entangled, phase-encoded superpositions of value states weighted by complex attention coefficients.
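The effect of the CLCU construction can be sketched classically by applying the weighted sum $\tfrac{1}{\Omega'}\sum_j \alpha_j U_j$ directly to a statevector. The single-qubit unitaries and complex weights below are illustrative placeholders, not circuits from the paper; the resulting vector is the (subnormalized) post-selected state, whose squared norm is the success probability:

```python
import math

def mat_vec(U, v):
    """Apply a 2x2 matrix to a length-2 statevector."""
    return [U[0][0]*v[0] + U[0][1]*v[1],
            U[1][0]*v[0] + U[1][1]*v[1]]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]               # Pauli-X
S = [[1, 0], [0, 1j]]              # phase gate

unitaries = [I, X, S]
alphas = [0.6, 0.3j, -0.2 + 0.1j]  # arbitrary complex attention weights

omega = math.sqrt(sum(abs(a)**2 for a in alphas))  # Omega' = sqrt(sum |a_j|^2)
v = [1, 0]                                         # input state |0>

out = [0j, 0j]
for a, U in zip(alphas, unitaries):
    Uv = mat_vec(U, v)
    out = [out[0] + (a / omega) * Uv[0], out[1] + (a / omega) * Uv[1]]

# Squared norm of 'out' = probability of post-selecting the ancilla on |0...0>.
print(out, sum(abs(c)**2 for c in out))
```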

3. Quantum Multi-Head Self-Attention Mechanism

QCSAM generalizes classical multi-head attention by instantiating HH independent quantum attention heads. Each head performs the following:

  • Trainable feature maps generate query $\ket{Q^{(h)}_k}$ and key $\ket{K^{(h)}_j}$ states for each head $h$.
  • Complex overlaps $\alpha^{(h)}_{jk} = \langle K^{(h)}_j | Q^{(h)}_k \rangle$ are estimated via quantum Hadamard tests, which extract both the real and imaginary parts.
  • The CLCU protocol linearly combines the value states $\ket{V^{(h)}_j}$:

$$\ket{S^{(h)}_k} = \sum_{j=0}^{N-1} \alpha^{(h)}_{jk} \ket{V^{(h)}_j}$$

  • Outputs from all heads are aggregated via a second trainable CLCU with complex weights $\gamma_h$:

$$\ket{\mathrm{head\_out}} = \sum_{h=0}^{H-1} \gamma_h \ket{G^{(h)}}$$

This multi-head procedure allows QCSAM to capture diverse interference patterns and high-rank subspaces inaccessible to single-head or real-valued analogues (Chen et al., 24 Mar 2025; Evans et al., 2024).
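The Hadamard-test step above can be simulated classically: for states $\ket{K}$ and $\ket{Q}$, the test measures an ancilla with $P(0) = (1 + \mathrm{Re}\langle K|Q\rangle)/2$, and inserting an $S^\dagger$ gate yields the imaginary part (up to a sign convention). A minimal sketch that samples the ancilla statistics rather than building circuits, with illustrative states:

```python
import random

def overlap(k, q):
    return sum(a.conjugate() * b for a, b in zip(k, q))

def hadamard_test(k, q, imaginary=False, shots=200_000, seed=0):
    """Estimate Re<k|q> (or Im<k|q>) by sampling ancilla outcomes with
    P(0) = (1 + value)/2, the statistics a Hadamard test produces."""
    rng = random.Random(seed)
    s = overlap(k, q)
    target = s.imag if imaginary else s.real
    p0 = (1 + target) / 2
    zeros = sum(rng.random() < p0 for _ in range(shots))
    return 2 * zeros / shots - 1  # invert P(0) = (1 + value)/2

k = [0.8, 0.6j]   # illustrative normalized key state
q = [0.6, 0.8]    # illustrative normalized query state

re = hadamard_test(k, q)
im = hadamard_test(k, q, imaginary=True)
print(re, im)  # approaches Re<k|q> = 0.48 and Im<k|q> = -0.48 as shots grow
```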

4. Practical Implementation Details and Circuit Complexity

Quantum data are encoded using amplitude or angle encoding, often after PCA dimensionality reduction to match the register size ($n = 3, \dots, 8$ qubits). Quantum feature maps $U(\Theta)$ are constructed from parameterized quantum circuit (PQC) blocks of single-qubit rotations and CNOTs. QCSAM's complexity scales as $O(HNd)$ for $H$ heads, $N$ tokens, and encoding dimension $d$ per head, with ancillary overhead for the CLCU circuits.
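The amplitude-encoding step can be illustrated in a few lines (the 4-dimensional feature vector is a hypothetical stand-in for a PCA-reduced input, not data from the paper): the classical vector is zero-padded to $2^n$ entries and L2-normalized into the amplitudes of an $n$-qubit register:

```python
import math

def amplitude_encode(x, n_qubits):
    """Map a real feature vector to 2**n normalized amplitudes,
    zero-padding and L2-normalizing as amplitude encoding requires."""
    dim = 2 ** n_qubits
    if len(x) > dim:
        raise ValueError("vector does not fit in the register")
    padded = list(x) + [0.0] * (dim - len(x))
    norm = math.sqrt(sum(v * v for v in padded))
    return [v / norm for v in padded]

# A hypothetical PCA-reduced feature vector mapped onto 2 qubits (4 amplitudes).
state = amplitude_encode([3.0, 1.0, 2.0, 1.0], n_qubits=2)
print(state)
print(sum(a * a for a in state))  # ≈ 1.0: a valid quantum state
```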

Training combines quantum gradient estimation via the parameter-shift rule with classical optimization of the circuit and CLCU parameters. CLCU post-selection and amplitude amplification introduce inherent stochasticity but are not prohibitive for present NISQ hardware in low-qubit regimes (Chen et al., 24 Mar 2025; Liu et al., 2 Dec 2025; Guo et al., 25 Aug 2025).
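The parameter-shift rule can be verified on a one-qubit toy model (an illustrative assumption, not QCSAM's actual circuit): for $\ket{\psi(\theta)} = R_y(\theta)\ket{0}$, the expectation $\langle Z\rangle = \cos\theta$, and the rule recovers the exact gradient from two shifted circuit evaluations:

```python
import math

def expect_z(theta):
    """<Z> for Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return math.cos(theta)

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    """df/dtheta = [f(theta + s) - f(theta - s)] / (2 sin s); exact for
    gates generated by Pauli operators, with the canonical s = pi/2."""
    return (f(theta + shift) - f(theta - shift)) / (2 * math.sin(shift))

theta = 0.7
g = parameter_shift_grad(expect_z, theta)
print(g, -math.sin(theta))  # the two agree: d/dtheta cos(theta) = -sin(theta)
```

Unlike finite differences, the shifted evaluations are full-size rotations, so the estimate is robust to shot noise on hardware.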

5. Empirical Performance and Comparative Studies

QCSAM demonstrates statistically significant improvements over prior quantum self-attention models on imaging and sequence benchmarks. On MNIST and Fashion-MNIST, QCSAM achieves (with 4 qubits) 100% and 99.2% test accuracy, respectively, outperforming QKSAN (real-valued kernel, phase-insensitive), QSAN (CNOT-fusion, more qubits), and GQHAN (Grover oracle, lower accuracy) (Chen et al., 24 Mar 2025). Accuracy increases when scaling from 3 to 8 qubits, and dual-head models outperform single-head models by approximately 2% on difficult tasks. Ablation studies show 0.72% and 0.54% improvements over SWAP-test and kernel-based methods (paired t-test, p < 0.05), directly attributing the gains to explicit complex-valued attention (Chen et al., 24 Mar 2025).

In alternative architectures such as QSAN (Shi et al., 2022), SASQuaTCh (Evans et al., 2024), and hardware-aware differentiable search (Liu et al., 2 Dec 2025), phase preservation and complex-valued overlaps are consistently identified as critical to quantum advantage, model expressiveness, and efficient learning.

6. Extensions, Applications, and Future Directions

QCSAM's formulation is directly extensible to sequence modeling, graph learning, and quantum natural language processing, where capturing complex amplitude and global phase correlations is essential. Theoretical prospects include further analysis of quantum-classical expressivity separation, the role of superposition and entanglement in attention landscapes, and algorithmic efficiency for large-scale tasks (Chen et al., 24 Mar 2025, Pecilli et al., 6 Feb 2026).

Next steps involve evaluating QCSAM in deeper multi-layer Transformer stacks, adapting to realistic quantum hardware constraints (connectivity, noise), and exploring broader task classes, such as quantum phase recognition (Chen et al., 31 Jan 2026), physical sequence prediction, and quantum-enhanced classical ML tasks.

7. Summary Table: Core Innovations of QCSAM vs. Prior Quantum Self-Attention Architectures

| Aspect | QCSAM (Chen et al., 24 Mar 2025) | Prior Quantum Models |
|---|---|---|
| Similarity measure | Complex-valued overlap $\langle K|Q\rangle$ | Real-valued (SWAP, kernel) |
| Attention weight construction | CLCU with complex coefficients | LCU (real weights); density-matrix; SWAP- or CNOT-based |
| Multi-head mechanism | Fully quantum, parallel CLCUs | Absent or simulated |
| Performance (MNIST, 4-qubit test accuracy) | 100% | 99%-100% (QSAN with 8 qubits; QKSAN with 4 qubits: 99%) |
| Ablation gain (complex vs. real) | +0.54-0.72% (statistically significant) | None |
| Scalability and expressivity | 3-8 qubits, richer phase interference | Lower, phase omitted |

QCSAM establishes a new reference architecture for quantum-native attention, integrating the mathematical richness of quantum information with the operational structure of self-attention, and delivering demonstrable performance gains within constraints of present and emerging quantum devices (Chen et al., 24 Mar 2025, Evans et al., 2024, Liu et al., 2 Dec 2025, Shi et al., 2022, Pecilli et al., 6 Feb 2026).
