
Noisy Intermediate-Scale Quantum (NISQ) Systems

Updated 26 November 2025
  • NISQ systems are gate-based quantum processors with O(10–1000) qubits that operate below full error-correction thresholds, defining a regime of near-term, noisy quantum computation.
  • They leverage shallow, hybrid quantum–classical algorithms like VQE and QAOA to tackle simulation, optimization, and machine learning tasks despite significant noise.
  • Custom compilation, scheduling, and error-mitigation techniques are essential for optimizing circuit depth and enhancing device stability on current NISQ hardware.

Noisy Intermediate-Scale Quantum (NISQ) Systems are gate-based quantum processors comprising O(10¹–10³) qubits operating without fault-tolerant error correction, with gate error rates in the 10⁻³–10⁻² range, moderate qubit connectivity, and coherence times that cap allowable circuit depths at the order of O(10²–10³) gates. They occupy the regime between proof-of-principle “few-qubit” demonstrators and fully fault-tolerant quantum computers, defining both the technological and algorithmic frontier of near-term quantum computation. NISQ devices lack full error correction and are characterized by significant temporal and spatial fluctuations in hardware noise, resulting in unpredictable device stability and algorithmic reproducibility. Despite these constraints, they enable early explorations in quantum simulation, optimization, sampling, and quantum machine learning, and serve as testbeds for quantum error mitigation, compilation strategies, and device-characterization protocols.

1. Physical Characteristics and Operating Regime

NISQ systems are defined by (i) qubit counts in the range 10–1000, (ii) two-qubit gate fidelities typically ≳98–99.9 %, (iii) single- and two-qubit coherence times (T₁, T₂) of order 10–100 μs (superconductors) or ≳1 s (trapped ions), (iv) shallow circuit-depth limits imposed by error accumulation and decoherence, and (v) physical connectivity (often 2D nearest-neighbor for superconductors, or all-to-all for ions) that constrains qubit mapping and routing strategies (Preskill, 2018, Ezratty, 2023). The volume of computation that can be executed before decoherence dominates is roughly bounded by the condition N·d·ε ≪ 1, with N the qubit count, d the circuit depth, and ε the two-qubit error rate (Ezratty, 2023); a back-of-the-envelope reading of this budget is sketched after Table 1. Table 1 organizes typical hardware performance metrics:

| Platform | Qubit Count | 2Q Fidelity (%) | Gate Time | T₁/T₂ |
| --- | --- | --- | --- | --- |
| IBM Eagle/Egret (2020) | 27–33 | 99.3–99.7 | 100 ns | 100 μs |
| Google Sycamore (2022) | 72 | 98.6 | 20 ns | 100 μs |
| IonQ (trapped-ion) | 11 | 99.8–99.9 | 50–200 μs | 1–10 s |
| Quantinuum H1 | 20 | 99.9 | 100 μs | 1–10 s |
| Pasqal (neutral-atom) | 100 | 97–99 | 1 ms | 0.1–1 s |
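
To make the N·d·ε ≪ 1 budget concrete, the sketch below estimates the depth at which the accumulated two-qubit error becomes order-unity; the qubit counts and error rates used here are illustrative assumptions in the spirit of Table 1, not values quoted from the cited papers.

```python
# Back-of-the-envelope depth budget from the N * d * eps << 1 rule:
# estimate the depth d at which N * d * eps reaches ~1, i.e., where the
# accumulated two-qubit error becomes order-unity. Illustrative numbers only.

def breakdown_depth(n_qubits: int, two_qubit_error: float) -> int:
    """Depth d at which n_qubits * d * two_qubit_error ~ 1."""
    return int(1.0 / (n_qubits * two_qubit_error))

for name, n, eps in [("superconducting, 27 qubits", 27, 5e-3),
                     ("trapped-ion, 20 qubits", 20, 1e-3)]:
    print(f"{name}: errors dominate beyond ~{breakdown_depth(n, eps)} layers")
```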

NISQ processors inherently lack logical-qubit encoding or surface-code stabilization, so error correction is not feasible except for limited repetition codes or small code patches (Preskill, 2018, Ezratty, 2023). Device noise sources include gate infidelities, readout errors (1–40 %), non-uniform T₁/T₂, crosstalk, and drift, all of which fluctuate over time and qubits (Dasgupta et al., 2021, Dasgupta et al., 2023).

2. Noise, Stability, and Device Variability

Quantum operations on NISQ devices must contend with both static and highly time-dependent noise sources—decoherence, leakage, crosstalk, SPAM errors—whose rates can drift substantially (up to 0.4–1.0 Hellinger distance) within a month or across qubit registers (Dasgupta et al., 2021, Dasgupta et al., 2023). Key metrics for device performance and stability include:

  • Initialization fidelity $F_I = 1 - e_R$, where $e_R$ is the readout error for $|0\rangle^{\otimes n}$.
  • Gate fidelity $F_G = 1 - \epsilon_G$, with $\epsilon_G$ extracted from randomized benchmarking.
  • Duty cycle $\tau = T_2/T_G$ for a typical gate, e.g. CNOT.
  • Addressability $F_A = 1 - \eta(X,Y)$, with $\eta$ the normalized mutual information between two qubits' measurement outcomes (Dasgupta et al., 2021).

Device instability is assessed via Hellinger distance between the distributions of such parameters over time and space. For instance, month-to-month $F_I$ and $F_G$ can show Hellinger distances exceeding 0.2–0.5, and spatial inhomogeneity across registers spans the entire range (0–1), with implications for the reproducibility of any quantum computation (Dasgupta et al., 2021, Dasgupta et al., 2023). Device reliability for circuit outcomes can be quantitatively bounded in terms of the Hellinger distance between noise distributions, leading to strict thresholds on how frequently hardware must be recalibrated to guarantee statistical reproducibility (Dasgupta et al., 2023).
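
As a concrete illustration of this stability analysis, the following numpy sketch computes the Hellinger distance between two histograms of a device parameter, such as daily gate-fidelity calibration readings; the synthetic data and binning below are assumptions for illustration only.

```python
import numpy as np

def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    """Hellinger distance between two discrete probability distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Illustrative only: two months of daily two-qubit-gate fidelity readings.
rng = np.random.default_rng(0)
month_a = rng.normal(0.991, 0.002, 30)   # hypothetical month 1
month_b = rng.normal(0.985, 0.004, 30)   # hypothetical month 2 (drifted)

bins = np.linspace(0.97, 1.0, 16)
hist_a, _ = np.histogram(month_a, bins=bins)
hist_b, _ = np.histogram(month_b, bins=bins)
print(f"Hellinger distance between months: {hellinger(hist_a, hist_b):.3f}")
```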

3. Compilation, Scheduling, and Error-Aware Optimization

Resource limitations in NISQ systems, particularly bounded coherence and variable error rates, necessitate custom compilation and mapping strategies. Constraint-based compilers model the placement of program qubits onto hardware qubits, gate scheduling, and routing, incorporating device-dependent metrics (gate times, error rates, connectivity) as optimization constraints. Formally:

  • For each program qubit $p$ and hardware site $q$, a binary variable $m_{p,q} \in \{0,1\}$ encodes placement.
  • For each gate $g$, variables include a start time $\tau_g$ and duration $\delta_g$, subject to dependency, coherence, and exclusivity constraints (e.g., $\tau_g + \delta_g \leq T_2(q)$ for each qubit $q$ on which $g$ acts); a small validation sketch of this coherence check follows this list.
  • Routing of non-adjacent CNOTs is optimized via either rectangle-reservation or 1-bend-path policies, enforcing spatio-temporal exclusivity (Murali et al., 2019, Murali et al., 2019).
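
The coherence constraint above can be checked mechanically for a candidate schedule. The sketch below is a simplified validation pass with hypothetical gate timings; actual constraint-based compilers (Murali et al., 2019) solve for the schedule rather than merely validating one.

```python
# Validate a candidate schedule against the coherence constraint
# tau_g + delta_g <= T2(q) for every qubit q touched by gate g.
# Times in microseconds; all numbers are illustrative assumptions.

T2 = {0: 90.0, 1: 110.0, 2: 70.0}              # per-qubit T2
schedule = [                                    # (gate, qubits, start tau_g, duration delta_g)
    ("h q0",     [0],    0.0, 0.05),
    ("cx q0,q1", [0, 1], 0.1, 0.40),
    ("cx q1,q2", [1, 2], 0.6, 0.40),
]

violations = [
    (name, q) for name, qubits, tau, delta in schedule
    for q in qubits if tau + delta > T2[q]
]
print("coherence violations:", violations or "none")
```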

Dynamic, calibration-aware compilers retrieve daily device parameters and adapt qubit placement/routing to avoid defective or high-error qubits, yielding 2.9×–18× increased program success rates compared to standard transpilers (e.g., IBM Qiskit) and up to 6× runtime reductions (Murali et al., 2019). Heuristic mapping and greedy scheduling scale to 256 qubits, producing schedules that respect all coherence and hardware constraints at marginal overhead (<2× makespan) over optimal but computationally intractable SMT formulations (Murali et al., 2019).
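
A minimal sketch of the calibration-aware idea, not the published algorithm: rank hardware qubits by a freshly retrieved error metric and place program qubits on the best ones. The calibration snapshot below is invented for illustration.

```python
# Toy calibration-aware placement: assign program qubits to the hardware
# qubits with the lowest combined readout + two-qubit error. Illustrative only.

calibration = {                      # hypothetical daily calibration snapshot
    0: {"readout": 0.020, "cx": 0.012},
    1: {"readout": 0.010, "cx": 0.006},
    2: {"readout": 0.080, "cx": 0.030},   # high-error qubit: avoided
    3: {"readout": 0.015, "cx": 0.008},
}

def place(program_qubits, calibration):
    """Map each program qubit to the lowest-error available hardware qubit."""
    ranked = sorted(calibration,
                    key=lambda q: calibration[q]["readout"] + calibration[q]["cx"])
    return dict(zip(program_qubits, ranked))

print(place(["p0", "p1", "p2"], calibration))   # {'p0': 1, 'p1': 3, 'p2': 0}
```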

4. Quantum Algorithms and NISQ-Adaptive Strategies

Due to circuit-depth constraints and noise, NISQ-suitable algorithms are typically shallow, hybrid quantum–classical variational procedures. Two paradigmatic classes dominate:

  • Variational Quantum Eigensolver (VQE) and Quantum Approximate Optimization Algorithm (QAOA): parameterized shallow circuits (depth $d \ll d_{\mathrm{max}}$) with iterative classical optimization in a hybrid loop; a toy version of this loop is sketched after this list. Resource estimates for VQE scale as $S \sim O(N^4/\epsilon^2)$ shots per iteration, with required gate error rates of $10^{-4}$–$10^{-6}$ for chemical accuracy (Preskill, 2018, Ezratty, 2023). QAOA for MaxCut-like problems demands $N \gg 100$ for quantum speedup, with practical value emerging only at circuit sizes presently inaccessible to NISQ (Ezratty, 2023).
  • Quantum Machine Learning (QML) and Quantum Neural Networks: depth-limited dissipative architectures (e.g., DQNN) that trade increased qubit count for reduced gate depth exhibit enhanced noise tolerance compared to deeper QAOA-style circuits (Beer et al., 2021).
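
The following toy sketch illustrates the hybrid loop on a single-qubit problem, using a noiseless statevector simulation and SciPy's classical optimizer; the Hamiltonian and ansatz are arbitrary illustrative choices, not taken from the cited works.

```python
import numpy as np
from scipy.optimize import minimize

# Toy hybrid loop in the spirit of VQE: a one-qubit ansatz Ry(theta)|0>,
# the energy of H = Z + 0.5 X evaluated on a (noiseless) simulator, and a
# classical optimizer closing the loop. Illustrative sketch only.

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = Z + 0.5 * X

def ansatz(theta: float) -> np.ndarray:
    """State Ry(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params: np.ndarray) -> float:
    psi = ansatz(params[0])
    return float(np.real(psi.conj() @ H @ psi))

result = minimize(energy, x0=np.array([0.1]), method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE energy {result.fun:.4f} vs exact ground energy {exact:.4f}")
```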

Algorithmic performance on NISQ hardware is fundamentally limited by total error accumulation, with the maximum tolerable per-gate error scaling as $\epsilon_{\mathrm{tol}}(C) \lesssim \gamma_C/|C|$, where $|C|$ is the number of gates and $\gamma_C$ a model-dependent constant rarely exceeding 2.5 (Brandhofer et al., 2023). Within this framework, only very shallow and robust circuits achieve substantial success probability on current NISQ platforms.
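
A short worked example of this budget, assuming the common $(1-\epsilon)^{|C|}$ success-probability heuristic (an approximation, not a result of the cited paper):

```python
# Error-budget bound eps_tol(C) ≲ gamma_C / |C| for a 1,000-gate circuit,
# taking gamma_C = 2.5 (the upper-end value quoted in the text), plus the
# heuristic success probability (1 - eps)^|C| at a typical NISQ error rate.
n_gates, gamma = 1_000, 2.5
print(f"tolerable per-gate error ~ {gamma / n_gates:.1e}")
print(f"success probability at eps = 1e-3: {(1 - 1e-3) ** n_gates:.2f}")   # ~0.37
```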

Error-mitigation techniques—zero-noise extrapolation, probabilistic error cancellation, virtual distillation, dynamical decoupling, randomized compiling—can suppress effective error rates, but their sampling and circuit overheads grow exponentially with circuit depth or qubit count, making them viable only for the shallowest circuits (Ezratty, 2023).
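
To give a flavor of these techniques, the sketch below mimics zero-noise extrapolation: expectation values measured at artificially amplified noise scales (e.g., via gate folding) are fit and extrapolated back to the zero-noise limit. The exponential decay model and numbers are synthetic assumptions.

```python
import numpy as np

# Minimal zero-noise extrapolation (ZNE) illustration with synthetic data.
scale_factors = np.array([1.0, 2.0, 3.0])          # noise-amplification factors
ideal_value = 1.0
noisy_values = ideal_value * np.exp(-0.15 * scale_factors)   # synthetic noisy estimates

coeffs = np.polyfit(scale_factors, noisy_values, deg=2)       # Richardson-style polynomial fit
zne_estimate = np.polyval(coeffs, 0.0)                        # extrapolate to zero noise

print(f"raw (scale 1): {noisy_values[0]:.3f}, "
      f"ZNE estimate: {zne_estimate:.3f}, ideal: {ideal_value}")
```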

5. Simulation of Many-Body and Open Quantum Systems

NISQ hardware enables the digital simulation of closed and open quantum-system dynamics, including Lindblad master equations, using either Trotterization strategies or recently developed trotterless Kraus series representations that yield constant-depth circuits for a broad class of dissipative systems (Burdine et al., 2024). IBM Q devices have demonstrated digital implementation of both unital (pure dephasing, depolarizing channels) and non-unital (amplitude damping) dynamics, simulation of Markovian and non-Markovian open-system processes, collisional models, revival of quantum channel capacity, and extractable work in non-Markovian environments (García-Pérez et al., 2019).
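
For a concrete sense of the non-unital channels mentioned above, the following numpy sketch applies the amplitude-damping channel to a density matrix via its Kraus operators; it illustrates only the channel itself, not the constant-depth circuit construction of the cited work.

```python
import numpy as np

# Amplitude-damping channel applied via Kraus operators:
# rho -> K0 rho K0^dagger + K1 rho K1^dagger. Illustrative only.

gamma = 0.3                                   # damping strength (assumed)
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

rho = np.array([[0, 0], [0, 1]], dtype=complex)            # start in |1><1|
rho_out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

print(np.real_if_close(np.round(rho_out, 3)))
# expected: [[0.3, 0.0], [0.0, 0.7]] -- population relaxes toward |0>
```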

For tailored chemical simulation, protocols combining driven-similarity renormalization group (DSRG) effective Hamiltonians, correlation-energy-based active-orbital selection, and noise-resilient wavefunction ansätze enable accurate reaction modeling with NISQ-compatible, resource-minimal schemes, with demonstrated accuracy within chemical precision on cloud hardware, even for systems with up to tens of atoms (Zeng et al., 2024).

Hybrid classical–quantum algorithms such as truncated-Taylor quantum simulators remove the "barren-plateau" obstacle by relegating all parameter optimization to the classical domain after batch overlap measurements, providing resource-efficient Hamiltonian simulation strategies for the NISQ era (Lau et al., 2021).

Simulation frameworks such as SANQ enable end-to-end cycle-accurate modeling of both quantum processor noise and classical control hardware, allowing "what-if" architectural exploration and compiler–hardware co-design prior to device fabrication (Li et al., 2019).

6. Complexity-Theoretic Perspective and the Computational Power of NISQ

The complexity class $\textsf{NISQ}$ models the problems solvable by classical computers with oracle access to noisy, non-error-corrected quantum devices. Formally, an $n$-qubit NISQ device evolves the all-zero state under a bounded-depth sequence of two-qubit gates, with each layer followed by independent depolarizing noise of strength $\lambda$, and concludes with a projective measurement (Chen et al., 2022); a toy numerical illustration of this layered-noise model follows the relation below. The central relationships are:

  • $BPP \subsetneq \textsf{NISQ} \subsetneq BQP$.
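
A toy numerical illustration of this model, assuming a single qubit and an arbitrary per-layer unitary: depolarizing noise after every layer steadily drives the state toward the maximally mixed state, which is the mechanism behind the separations listed below.

```python
import numpy as np

# One qubit evolved as in the NISQ oracle model: each unitary layer is
# followed by depolarizing noise of strength lambda. Purity decays toward
# the maximally mixed value of 0.5. Parameters are illustrative assumptions.

lam = 0.05                                        # per-layer depolarizing strength
H_gate = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
for layer in range(1, 61):
    rho = H_gate @ rho @ H_gate.conj().T          # a unitary layer
    rho = (1 - lam) * rho + lam * I2 / 2          # depolarizing noise
    if layer in (1, 10, 30, 60):
        purity = float(np.real(np.trace(rho @ rho)))
        print(f"layer {layer:2d}: purity {purity:.3f}")
```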

Key results:

  • For oracle-relativized problems, NISQ achieves super-polynomial speedups (e.g., a robust version of Simon's problem), but is exponentially weaker than $BQP$ on others.
  • For unstructured search, NISQ cannot attain Grover's quadratic speedup; noise forces query complexity back to the classical regime.
  • For structured problems such as Bernstein–Vazirani, NISQ can achieve logarithmic query complexity in the presence of constant-rate local noise.
  • Any quantum state learning (shadow tomography) protocol on NISQ hardware must use exponentially many copies in $n$, losing the exponential quantum advantage at any nonzero noise rate (Chen et al., 2022).

The implication is that while NISQ offers advantages on certain algebraic tasks, the generic quantum speedups for unstructured search and exponential acceleration for tomography collapse in the presence of non-negligible noise. The practical window for NISQ advantage is thus algorithm- and noise-model specific.

7. Practical Applications, Limitations, and Outlook

Experimental demonstrations on NISQ hardware to date include K-means clustering via depth-minimized quantum interference schemes (Khan et al., 2019), Floquet dynamics and discrete time crystal realization in many-body circuits (Anand et al., 2021, Ippoliti et al., 2020), and scalable simulative frameworks for quantum networks that leverage native hardware noise as a simulation resource rather than an obstacle (Riera-Sàbat et al., 10 Jun 2025).

Common bottlenecks observed across all NISQ use cases are:

  • Insufficient qubit count versus required error rate for realistic circuits,
  • Exponential measurement overhead and classical post-processing for VQE/QAOA,
  • Ease of classical emulation for shallow/noisy circuits,
  • Lack of robust commercial applications due to the fragility and non-stationarity of current hardware (Ezratty, 2023).

The future of NISQ is likely bifurcated: in the near term, a narrow "NISQ speedup window" is projected for specialized analog or hybrid quantum computing tasks (sampling, low-rank optimization, tightly tailored quantum chemistry) provided error rates can be pushed below 10⁻⁴–10⁻⁵ on ~100 qubits. In parallel, the development of fully error-corrected, fault-tolerant quantum computers (FTQC)—requiring O(10⁴–10⁹) physical qubits—will ultimately subsume universal quantum computation. Both NISQ and FTQC will require co-designed toolchains, hardware-aware compilation strategies, and real-time noise-model integration into all stages of quantum program synthesis and execution (Ezratty, 2023).

In summation, NISQ systems furnish a unique experimental and algorithmic landscape for noise-robust quantum information processing and simulation, but their computational utility is stringently bounded by noise-induced limitations on scale, reproducibility, and algorithmic success—necessitating continual advances in device characterization, compilation, algorithm design, and error mitigation for the regime to realize its potential.
