
eXpressive QAOA (XQAOA) Algorithm

Updated 22 September 2025
  • eXpressive QAOA (XQAOA) is an overparameterized Quantum Approximate Optimization Algorithm that uses additional classical parameters and a Pauli-Y mixer to enhance expressivity and address optimization challenges.
  • It significantly improves scalability and solution quality in combinatorial optimization problems such as MaxCut and the Binary Paint Shop Problem, often surpassing classical heuristics.
  • Analytical and empirical studies demonstrate that XQAOA maintains low circuit depth, robust convergence, and efficient hardware mapping, making it ideal for NISQ-era devices.

eXpressive QAOA (XQAOA) denotes an overparameterized family of Quantum Approximate Optimization Algorithm (QAOA) variants designed to improve approximation quality, trainability, and scalability—especially at low circuit depths—by introducing additional classical parameters and enhanced ansatz constructions. XQAOA generalizes the traditional QAOA circuit by allowing per-qubit or per-term control in both the problem and mixer unitaries, commonly adding a Pauli-Y component to the mixer Hamiltonian, thereby significantly increasing the expressive capacity of the quantum state prepared at each depth. This approach targets practical combinatorial optimization problems such as MaxCut, coloring, and the Binary Paint Shop Problem (BPSP), demonstrating superior performance in regimes relevant to current noisy intermediate-scale quantum (NISQ) devices (Vijendran et al., 2023, Vijendran et al., 18 Sep 2025). The expressivity of the ansatz is central to overcoming reachability and barren plateau issues in standard QAOA and enables robustness in optimization, even as problem sizes grow.

1. Formal Definition and Structure of XQAOA

The core ansatz of XQAOA at circuit depth $p = 1$ is

$$U(\alpha, \beta, \gamma) = U(\mathcal{A}, \alpha)\, U(\mathcal{B}, \beta)\, U(\mathcal{C}, \gamma)$$

with

  • $U(\mathcal{C}, \gamma) = \exp(-i \gamma\, \mathcal{C})$ (problem unitary, with a possibly vector-valued $\gamma$)
  • $U(\mathcal{B}, \beta) = \prod_{j=1}^{n} \exp(-i \beta_j X_j)$ (standard transverse-field, per-qubit mixer)
  • $U(\mathcal{A}, \alpha) = \prod_{j=1}^{n} \exp(-i \alpha_j Y_j)$ (Pauli-Y mixer per qubit)

The inclusion of the independent Pauli-Y angles ($\alpha_j$) allows for arbitrary single-qubit rotations, thereby yielding an ansatz dense in the set of product states (Vijendran et al., 2023). This can be extended to multiple layers $p > 1$ and to "multi-angle" cases, where parameters are defined per edge or per term in the cost Hamiltonian (as in MA-QAOA (Herrman et al., 2021)).

In XQAOA, the total parameter set scales as $O(n + m)$ for $n$ qubits and $m$ edges (or clauses), in contrast to the standard QAOA's $O(p)$ parameters. The circuit remains implementable at shallow depth, aligning with NISQ device limitations.
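The depth-one ansatz above is small enough to simulate directly. The following is a minimal numpy sketch on a hypothetical 3-node triangle instance; the instance, function names, and angle conventions $R_X(\beta) = e^{-i\beta X}$, $R_Y(\alpha) = e^{-i\alpha Y}$ are illustrative choices, not taken from the cited papers:

```python
import numpy as np

# Toy MaxCut instance: a 3-node triangle with unit weights (illustrative).
n = 3
edges = [(0, 1), (1, 2), (0, 2)]

def cut_diag(edge_list):
    # Diagonal of the cut operator: number of cut edges per bitstring.
    vals = np.zeros(2**n)
    for idx in range(2**n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        vals[idx] = sum(bits[u] != bits[v] for u, v in edge_list)
    return vals

def xqaoa_state(gammas, betas, alphas):
    # |psi> = U(A, alpha) U(B, beta) U(C, gamma) |+>^n at depth p = 1,
    # with one gamma per edge and one beta, alpha per qubit.
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)          # |+>^n
    for g, e in zip(gammas, edges):                          # problem unitary
        psi *= np.exp(-1j * g * cut_diag([e]))
    for j in range(n):                                       # per-qubit mixers
        rx = np.array([[np.cos(betas[j]), -1j * np.sin(betas[j])],
                       [-1j * np.sin(betas[j]), np.cos(betas[j])]])
        ry = np.array([[np.cos(alphas[j]), -np.sin(alphas[j])],
                       [np.sin(alphas[j]), np.cos(alphas[j])]])
        op = np.eye(1, dtype=complex)
        for q in range(n):
            op = np.kron(op, ry @ rx if q == j else np.eye(2))
        psi = op @ psi
    return psi

def expected_cut(psi):
    return float(np.real(np.vdot(psi, cut_diag(edges) * psi)))
```

With all angles at zero the circuit leaves the uniform superposition untouched, so the expected cut equals half the edge count; optimizing the $n + m$ angles then pushes $\langle C \rangle$ toward the maximum cut.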

2. Theoretical Properties and Analytical Results

XQAOA enables closed-form analysis for important problem classes. For MaxCut, the expected cost at $p = 1$ can be analytically expressed in terms of the various mixer and cost parameters (Vijendran et al., 2023, Ng et al., 14 Nov 2024):

$$\langle C_{uv} \rangle = \frac{w_{uv}}{2} \left[ 1 - a_{uv}^{(XX)}(\beta)\, \xi_{uv}^{(XX)}(\gamma) - a_{uv}^{(YY)}(\beta)\, \xi_{uv}^{(YY)}(\gamma) - \dots \right]$$

where the $a_{uv}^{(\cdot)}$ and $\xi_{uv}^{(\cdot)}$ are trigonometric functions of the variational parameters and the encoded local graph structure. XQAOA (with product mixers) only couples to local graph neighborhoods (e.g., triangles), in contrast to Grover-type mixers, which incorporate non-local (cycle) structure (Ng et al., 14 Nov 2024). This locality assures scalable analytical computation and clear intuition regarding expressibility and the optimization landscape.
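The locality claim can be checked numerically: at $p = 1$, $\langle C_{uv} \rangle$ depends only on edges touching $u$ or $v$, so appending far-away edges leaves it unchanged. A small self-contained sketch (uniform angles for brevity; the helper names are ours, not from the cited papers):

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2, dtype=complex)

def rot(theta, P):
    # exp(-i * theta * P) for a single-qubit Pauli P.
    return np.cos(theta) * I2 - 1j * np.sin(theta) * P

def embed(n, j, m):
    # Place single-qubit matrix m on qubit j of an n-qubit register.
    return reduce(np.kron, [m if q == j else I2 for q in range(n)])

def edge_cut_diag(n, u, v):
    # Diagonal of (1 - Z_u Z_v) / 2 over the computational basis.
    d = np.zeros(2**n)
    for idx in range(2**n):
        bu = (idx >> (n - 1 - u)) & 1
        bv = (idx >> (n - 1 - v)) & 1
        d[idx] = float(bu != bv)
    return d

def edge_expectation(n, edges, gamma, beta, alpha, edge):
    # p = 1 XQAOA with uniform angles: one gamma on every edge, then
    # RX(beta) and RY(alpha) on every qubit; returns <C_uv> for `edge`.
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)
    for (u, v) in edges:
        psi *= np.exp(-1j * gamma * edge_cut_diag(n, u, v))
    mixer = rot(alpha, Y) @ rot(beta, X)     # X rotation acts first
    for j in range(n):
        psi = embed(n, j, mixer) @ psi
    d = edge_cut_diag(n, *edge)
    return float(np.real(np.vdot(psi, d * psi)))
```

Extending a 4-node path by an extra pendant edge far from $(0,1)$ leaves $\langle C_{01} \rangle$ bitwise identical, matching the lightcone argument.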

A major analytic insight is that the extra parameters in the XY (or Y) mixer can, for certain infinite families of graphs (e.g., triangle-free graphs where every edge has odd degree), enable exact solution of MaxCut in a single layer, whereas traditional QAOA asymptotically saturates below the optimum (Vijendran et al., 2023).

3. Empirical Performance and Benchmarking

Empirical studies on regular graphs with up to 256 nodes (degrees 3–10) show that XQAOA at depth $p = 1$ consistently outperforms standard QAOA, MA-QAOA, and classical-relaxed algorithms—often even surpassing the Goemans-Williamson SDP algorithm for graph degrees above 4 (Vijendran et al., 2023). Similar superiority holds when XQAOA is deployed for the Binary Paint Shop Problem (BPSP): in large instances ($2^7$ to $2^{12}$ cars), $p = 1$ XQAOA achieves a robust average paint swap ratio of $0.357$, surpassing recursive QAOA (RQAOA) and all known classical heuristics, including those conjectured to be asymptotically optimal (Vijendran et al., 18 Sep 2025). This robustness is maintained with increasing problem size, a nontrivial property not shared by competing approaches.
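For context, the BPSP cost underlying the paint swap ratio is purely classical and easy to state in code. A minimal sketch, assuming (as is standard for BPSP) that each car appears exactly twice in a fixed sequence and its two occurrences must receive opposite colors; normalizing by the number of distinct cars is our illustrative choice:

```python
def paint_swaps(sequence, first_color):
    # sequence: the fixed car order, each car id appearing exactly twice.
    # first_color[c] in {0, 1}: color of car c's first occurrence; the
    # second occurrence is forced to the opposite color.
    seen = set()
    colors = []
    for car in sequence:
        if car in seen:
            colors.append(1 - first_color[car])
        else:
            colors.append(first_color[car])
            seen.add(car)
    # Count color changes between consecutive cars.
    return sum(a != b for a, b in zip(colors, colors[1:]))

def swap_ratio(sequence, first_color):
    # Ratio of color changes to distinct cars (our normalization choice).
    return paint_swaps(sequence, first_color) / (len(sequence) // 2)
```

A QAOA-type solver only chooses the binary vector `first_color`; everything else about the cost is fixed by the instance.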

For more general NP-hard problems (e.g., MaxCut on non-regular or weighted graphs), XQAOA’s expanded parameter set and its use of product mixers with per-term or per-qubit angles allow high-quality solutions while maintaining low circuit depth and resilience against barren plateau phenomena (Vijendran et al., 2023, Herrman et al., 2021).

4. Circuit Implementation: Constraints and Mixers

A distinguishing feature of XQAOA is the capability to enforce hard constraints natively via tailored mixers (see also XY-mixers (Wang et al., 2019)). For problems with one-hot (or $k$-hot) constraints (as in graph coloring or assignment problems), mixers constructed from XY Hamiltonians preserve the feasible subspace exactly; i.e., they commute with total occupation constraints and prevent leakage outside the valid set.

The general mixer form for vertex $v$, with $\kappa$ possible "colors," is

$$H_{XY,v} = \frac{1}{2} \sum_{c, c' \in K} \left( X_{v,c} X_{v,c'} + Y_{v,c} Y_{v,c'} \right)$$

where suitable choices for $K$ yield “complete graph” or “ring” mixers (Wang et al., 2019). Implementation can exploit commutativity for $O(\kappa)$ or $O(\log \kappa)$ circuit depth, depending on hardware connectivity and whether fast fermionic Fourier techniques are used. This approach outperforms penalty-term-based constraint handling in traditional QAOA.
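A quick numerical check of the feasibility-preservation claim: for a toy one-hot register with $\kappa = 3$ colors, the complete-graph XY mixer commutes with the total-occupation operator, and evolving a one-hot state never leaks out of the feasible subspace (helper names are illustrative):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(n, ops):
    # Tensor product with single-qubit operators at the given sites.
    return reduce(np.kron, [ops.get(q, I2) for q in range(n)])

kappa = 3  # one-hot register with 3 "colors" for one vertex (toy size)

# Complete-graph XY mixer: H = (1/2) * sum_{c<c'} (X_c X_c' + Y_c Y_c')
H = sum(0.5 * (embed(kappa, {c: X, cp: X}) + embed(kappa, {c: Y, cp: Y}))
        for c in range(kappa) for cp in range(c + 1, kappa))

# Total-occupation operator N = sum_c (I - Z_c) / 2
N = sum(0.5 * (embed(kappa, {}) - embed(kappa, {c: Z})) for c in range(kappa))

# Evolve the one-hot state |100> under exp(-i t H) via eigendecomposition.
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * 0.7 * w)) @ V.conj().T
psi = np.zeros(2**kappa, dtype=complex)
psi[0b100] = 1.0
out = U @ psi
```

Because $[H, N] = 0$, no penalty terms are needed: infeasible bitstrings simply never acquire amplitude.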

5. Trainability, Loss Landscapes, and Optimization

The optimization landscape of XQAOA is empirically and analytically found to be benign: local minima are typically close to global optima, and the addition of extra variational degrees of freedom reshapes loss surfaces, mitigating barren plateaus and spurious traps (Vijendran et al., 2023). For MaxCut, small-scale simulations up to $p = 5$ showed that almost all parameter initializations in XQAOA lead to high-quality outcomes, contrasting with the sensitivity and variability observed in QAOA and MA-QAOA.

The expressive power of XQAOA enables high-probability preparation of low-energy states even when using stochastic classical optimizers and under realistic measurement-shot constraints—dual annealing and natural evolution strategies have notably enabled successful optimization using as few as one measurement per energy estimate (Polloreno et al., 2022).
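As a sketch of why annealing-style optimizers cope with shot noise, the following uses SciPy’s `dual_annealing` with local search disabled on a toy landscape whose evaluations are corrupted by zero-mean noise, loosely mimicking a single-shot energy estimate (the landscape and noise model are illustrative, not the XQAOA cost):

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(0)

def true_energy(x):
    # Smooth toy landscape standing in for a variational energy surface
    # (purely illustrative; global minimum is roughly -0.97).
    return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * (x[0]**2 + x[1]**2)

def shot_limited_energy(x, shots=1):
    # Crude model of a finite-shot estimate: the true energy plus
    # zero-mean noise whose scale shrinks with the shot count.
    return true_energy(x) + rng.normal(0.0, 0.2 / np.sqrt(shots))

# Pure annealing (no gradient-based local search), which tolerates the
# stochastic objective; bounds and maxiter are arbitrary choices.
result = dual_annealing(shot_limited_energy, bounds=[(-2, 2), (-2, 2)],
                        maxiter=300, seed=1, no_local_search=True)
```

Disabling the local-search phase avoids feeding noisy evaluations to a gradient-based refiner; the annealer only ranks candidate points, which is far more tolerant of single-shot noise.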

6. Scalability, Compilation, and Hardware Adaptation

XQAOA’s utility hinges on scalable circuit depth and hardware efficiency. Compilers that exploit the commutativity of mixer and cost unitaries (e.g., Maaps strategies) enable QAOA and XQAOA circuits to be mapped onto hardware graphs (such as Google Sycamore and IBM’s heavy-hex lattice) in linear time, ensuring circuit depth of $O(n)$ and yielding substantial (up to $3.8\times$) depth reduction and $17\%$ lower gate count, resulting in $18\times$ higher estimated success probabilities on noisy hardware (Jin et al., 2021).

These capabilities remain robust as XQAOA is extended to problems with large numbers of qubits, including cases where divide-and-conquer strategies (such as QAOA-in-QAOA) are employed for large-scale graph problems, and sub-instances are solved using XQAOA as a module (Zhou et al., 2022).

7. Practical and Theoretical Implications

XQAOA provides a generalized, expressive framework unifying and subsuming many QAOA variants, including the multi-angle and adaptive-bias formulations, and is amenable to further augmentation by “problem-independent” multiparameter layers (as in QAOA+) (Chalupnik et al., 2022). It enables natural incorporation of constraint logic in encoding (e.g., ESOP pipelines for MIS (Brunet et al., 29 Aug 2025)), direct cost-phase encoding for compact and unambiguous circuit representation (as for TSP (Garhofer et al., 10 Dec 2024)), and supports modular hierarchical optimization structures.

Analytical work confirms that the product mixer structure of XQAOA aligns the cost expectation with local graph statistics, while more entangling or non-local mixers may further extend expressiveness to global correlations and cycles—a topic of ongoing research (Ng et al., 14 Nov 2024).

The strategic overparameterization of XQAOA enhances expressivity, convergence, and practical quantum advantage for a broad class of combinatorial optimization problems, positioning it as a fundamental algorithmic paradigm for NISQ-era quantum optimization.
