
MPS-Encoded Functions: Theory and Applications

Updated 5 December 2025
  • MPS-encoded functions are tensor network representations of classical functions whose values become the amplitudes of a quantum state with controlled entanglement.
  • Algorithms like IMPS, MPD, and TCI enable efficient state preparation by reducing circuit depth and gate counts through adaptive SVD and disentangler methods.
  • Applications span quantum simulation, finance, and image encoding, where hardware-adaptive MPS techniques ensure high fidelity and scalable resource usage.


Matrix Product State (MPS)-encoded functions are function representations structured as quantum states whose amplitudes—or, equivalently, data—are organized via MPS tensor network decompositions. This framework is central for efficiently mapping classical functions, probability distributions, and even structured datasets onto quantum states, an essential subroutine in many practical quantum algorithms. The development, algorithmic refinement, and performance of MPS-encoded function methodologies have established this approach as the leading paradigm for state preparation with rigorous control over resource scaling, circuit depth, and entanglement entropy.

1. Formalism of MPS-Encoded Functions

Given a classical function $f(x)$ defined on a discrete grid of $2^n$ points, its amplitude-encoded quantum state is

$$|\psi_f\rangle = \frac{1}{\sqrt{Z}} \sum_{x=0}^{2^n-1} f(x)\,|x\rangle,$$

with $Z$ the normalization constant. The MPS ansatz recasts $|\psi_f\rangle$ as

$$|\psi\rangle = \sum_{i_1,\ldots,i_n\in\{0,1\}} \mathrm{Tr}\!\left[A^{(1)}_{i_1}A^{(2)}_{i_2}\cdots A^{(n)}_{i_n}\right]|i_1\cdots i_n\rangle,$$

where the $A^{(k)}_{i_k}$ are matrices of size $\chi_{k-1}\times\chi_k$ and $\chi_k$ is the bond dimension across the $k$-th cut. The expressivity and manipulability of the encoding are governed by the maximum bond dimension $\chi = \max_k \chi_k$ (Wang et al., 18 Aug 2025; Green et al., 23 Feb 2025).
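
The decomposition above can be carried out classically by sweeping SVDs across the amplitude vector. A minimal NumPy sketch (function names are illustrative, not from the cited papers):

```python
import numpy as np

def mps_from_function(f_vals, chi_max=None, tol=1e-12):
    """Split a length-2**n amplitude vector into n MPS cores by sweeping SVDs left to right."""
    psi = np.asarray(f_vals, dtype=float)
    psi = psi / np.linalg.norm(psi)
    n = int(np.log2(psi.size))
    cores, rank = [], 1
    M = psi.reshape(rank * 2, -1)
    for _ in range(n - 1):
        U, S, Vh = np.linalg.svd(M, full_matrices=False)
        keep = S > tol * S[0]                      # drop numerically-zero Schmidt values
        if chi_max is not None:
            keep &= np.arange(S.size) < chi_max    # enforce the bond-dimension cap
        U, S, Vh = U[:, keep], S[keep], Vh[keep]
        cores.append(U.reshape(rank, 2, -1))       # left-canonical core A^{(k)}
        rank = S.size
        M = (S[:, None] * Vh).reshape(rank * 2, -1)
    cores.append(M.reshape(rank, 2, 1))
    return cores

def mps_to_vector(cores):
    """Contract the cores back into a dense state vector (for verification only)."""
    v = np.ones((1, 1))
    for A in cores:
        v = np.einsum('xa,aib->xib', v, A).reshape(-1, A.shape[-1])
    return v[:, 0]

# usage: encode a discretized Gaussian on 8 qubits with bond cap chi = 8
x = np.arange(2**8, dtype=float)
f = np.exp(-0.5 * ((x - 128.0) / 25.0) ** 2)
cores = mps_from_function(f, chi_max=8)
psi = f / np.linalg.norm(f)
fidelity = abs(mps_to_vector(cores) @ psi)
```

This is the dense-vector route, viable only for modest $n$; the sampling-based constructions discussed below avoid forming the full vector.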

2. Entanglement Scaling and Bond Dimension Control

For sufficiently smooth $f$, the Schmidt coefficients across an MPS bond decay at a universal rate set by the derivatives of $f$ (Bohun et al., 2024): the subleading singular values fall off rapidly, and the bipartite entanglement entropy saturates at a small constant as the grid is refined. Thus, for smooth $f$, only a small bond dimension $\chi$ is required for high-fidelity approximation, asymptotically independent of the grid size $2^n$. For functions with only finitely many derivatives, a modestly larger bond dimension is needed on the first few bonds, after which the universal decay takes over.

For non-smooth, localized, or heavy-tailed functions, the universal decay sets in only beyond a problem-dependent scale. For instance, an exponentially localized $f$ exhibits super-exponential decay of entanglement, whereas power-law tails cause slower, polynomial decay, requiring higher ranks before the universal regime is reached (Bohun et al., 2024).

3. MPS-Based State Preparation Algorithms

Efficient preparation of MPS-encoded states has advanced via the following leading algorithms:

Improved MPS (IMPS) and Disentangler Methods

The improved MPS protocol extracts shallow quantum circuits by recursively applying two-qubit disentangler gates that leverage SVD decompositions over pairs of qubits. Given a function class and its MPS representation, the algorithm proceeds by:

  • Forming two-site matrices over disjoint pairs of qubits and applying an SVD to extract each disentangler.
  • Utilizing a parallel contraction strategy, e.g., on a tree or hypercube, to exponentially reduce circuit depth on all-to-all topologies, with a smaller but still substantial reduction on planar grids.
  • Exploiting a structural reduction to 2-CNOT two-qubit unitaries per disentangler, achieving a 33% CNOT-count saving over generic 3-CNOT decompositions (Wang et al., 18 Aug 2025).

Matrix Product Disentangler (MPD) and Tensor Network Optimization (TNO)

The MPD algorithm constructs a shallow circuit with no ancilla overhead via:

  • A truncated SVD at each cut, projecting onto a bond-dimension-2 MPS and reading off a sequence of two-qubit gates per layer.
  • Layered application and inversion of these circuits, iterated a small number of times until the residual state is disentangled.
  • Optionally adding TNO (e.g., via L-BFGS-B) to further boost fidelity (Green et al., 23 Feb 2025).

For low-degree piecewise polynomials, the exact MPS construction requires only a small constant bond dimension (of order $d+1$ for degree-$d$ pieces), allowing near-unit fidelity with a few circuit layers and gate counts scaling linearly in the qubit number $n$.
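
The small-rank claim for piecewise polynomials is easy to verify numerically: reshape the amplitude vector across every cut and count significant singular values. A sketch with an illustrative piecewise quadratic (coefficients are arbitrary):

```python
import numpy as np

n = 10
x = np.arange(2**n, dtype=float)
m = 2**(n - 1)
# two different quadratics glued at x = m
f = np.where(x < m,
             1.0 + 0.003 * x + 1e-6 * x**2,
             5.0 - 0.002 * (x - m) + 2e-6 * (x - m) ** 2)
psi = f / np.linalg.norm(f)

ranks = []
for cut in range(1, n):
    s = np.linalg.svd(psi.reshape(2**cut, 2**(n - cut)), compute_uv=False)
    ranks.append(int(np.sum(s / s[0] > 1e-10)))  # numerical Schmidt rank at this cut
```

Each piece contributes at most $d+1 = 3$ to the rank at any cut, so the maximum over all cuts stays bounded by a small constant regardless of the grid size.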

Tensor Cross Interpolation (TCI)

TCI provides an oracle-based method for building MPS representations by adaptive sampling, obviating the need to store the full $2^n$-component vector. The core steps involve constructing interpolation matrices, applying the max-volume rows/columns principle, and extracting the tensor-train cores. TCI achieves complexity polynomial in $n$ and $\chi$ in both queries and storage, with controlled uniform interpolation error by construction (Bohun et al., 2024).
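
The pivoting idea behind TCI can be illustrated in the simplest (matrix) setting: build a CUR skeleton $A \approx C\,U^{-1}R$ from a few sampled rows and columns. This toy version uses full pivoting on the residual for clarity; real TCI applies max-volume pivot searches to tensor-train unfoldings and never forms the full matrix:

```python
import numpy as np

def cross_approx(A, max_rank, tol=1e-12):
    """Greedy cross (CUR) approximation A ~ C @ inv(U) @ R from selected rows/columns."""
    I, J = [], []
    Res = A.astype(float).copy()
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(Res)), Res.shape)
        if abs(Res[i, j]) < tol:
            break                                   # residual exhausted: exact rank reached
        I.append(i); J.append(j)
        Res = Res - np.outer(Res[:, j], Res[i, :]) / Res[i, j]  # rank-1 elimination
    C, U, R = A[:, J], A[np.ix_(I, J)], A[I, :]
    return C @ np.linalg.solve(U, R)

# exact recovery of a rank-2 matrix from 2 rows and 2 columns
rng = np.random.default_rng(0)
a, b, c, d = rng.normal(size=(4, 50))
A2 = np.outer(a, b) + np.outer(c, d)
err_exact = np.linalg.norm(A2 - cross_approx(A2, 5)) / np.linalg.norm(A2)

# approximate recovery of a smooth kernel from a few cross samples
i = np.arange(64.0)
K = np.exp(-((i[:, None] - i[None, :]) / 15.0) ** 2)
err_smooth = np.linalg.norm(K - cross_approx(K, 10)) / np.linalg.norm(K)
```

The skeleton decomposition is exact when the number of pivots matches the true rank, and small for smooth kernels whose singular values decay fast, which is the regime where TCI pays off.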

4. Function Class Examples, Explicit Constructions, and Circuit Depth

Classes of $f$ with bounded and/or small MPS rank:

  • Gaussian $f$: $\chi = 1$, circuit depth 1, realized by single-qubit rotations applied in parallel.
  • Low-degree polynomial $f$: $\chi \le d+1$ for degree $d$; linear $f$ has $\chi = 2$, requiring depth and two-qubit-gate counts linear in $n$.
  • Log-normal and financial payoff functions: often factorizable, achieving small constant bond dimension or products of such low-rank factors.
  • Heavy-tailed, Lévy-stable distributions: a larger initial rank precedes the universal regime, but $\chi \approx 3$–$4$ is still sufficient for high-dimensional cases (Bohun et al., 2024, Wang et al., 18 Aug 2025).

A table summarizing typical bond dimensions and circuit resources:

| Function Class | MPS Bond Dimension ($\chi$) | Circuit Depth / Gates |
|---|---|---|
| Gaussian | 1 | 1 (all single-qubit rotations in parallel) |
| Linear ($f(x)=ax+b$) | 2 | $O(n)$ depth, $O(n)$ 2QG |
| Quadratic ($d=2$) or degree-$d$ poly | $d+1$ | $O(n)$ depth, $O(n\chi^2)$ 2QG |
| Heavy-tailed (Lévy) | 3–4 | $O(n\chi^2)$ 2QG |

2QG: two-qubit gates
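
The bond-dimension column for the polynomial rows can be spot-checked directly: reshape the amplitude vector across each cut and count the numerically significant singular values (an illustrative sketch; the specific coefficients are arbitrary):

```python
import numpy as np

def max_cut_rank(f_vals, rel_tol=1e-10):
    """Largest numerical Schmidt rank over all bipartite cuts of the amplitude vector."""
    psi = f_vals / np.linalg.norm(f_vals)
    n = int(np.log2(psi.size))
    ranks = []
    for cut in range(1, n):
        s = np.linalg.svd(psi.reshape(2**cut, 2**(n - cut)), compute_uv=False)
        ranks.append(int(np.sum(s / s[0] > rel_tol)))
    return max(ranks)

x = np.arange(2**10, dtype=float)
chi_linear = max_cut_rank(1.0 + 2.0 * x)           # table: chi = 2
chi_quad = max_cut_rank((x - 300.0) ** 2 + 50.0)   # table: chi = d + 1 = 3
```

A degree-$d$ polynomial restricted to either side of a cut spans only the monomials $1, x, \ldots, x^d$, which is why the rank is exactly $d+1$ at every interior cut.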

5. Numerical Performance, Scaling, and Hardware Considerations

Rigorous numerical benchmarks validate that IMPS/MPD circuits routinely achieve very low infidelities using linear (or better) depth and modest gate counts for practical function classes:

  • IMPS hypercube scheduling reduces CNOT depth from linear in $n$ (chain scheduling) to logarithmic, with infidelity improvements of 1–2 orders of magnitude at equal depth.
  • The optimized 2-CNOT decomposition matches 3-CNOT variants in fidelity at a 33% reduction in two-qubit gate count (Wang et al., 18 Aug 2025).
  • On 2D grids, depth contracts from 11 to 5 layers, yielding higher fidelity at lower hardware overhead.
  • Large-scale experiments on IBM Q devices confirmed high-fidelity preparation with only 1–2 circuit layers, demonstrating viability even under device noise for practical function classes (Bohun et al., 2024).

For piecewise polynomials, exact or truncated MPS models achieve near-unit fidelities for systems of up to 20 qubits without ancillary qubits (Green et al., 23 Feb 2025).

6. Applications and Impact in Quantum and Classical Computation

MPS-encoded functions underpin numerous quantum algorithms that require efficiently loaded classical data, especially in quantum finance, simulation, and linear systems. Notably:

  • PDE solution via quantum-inspired MPS representations surpasses full-vector methods in both time and memory, especially with DMRG and Arnoldi global solvers, achieving exponential resource savings (García-Molina et al., 2023).
  • In image encoding, MPS approximations of discrete wavelet transforms allow preparation of high-resolution images (e.g., ChestMNIST) at shallow circuit depth and high fidelity (Green et al., 23 Feb 2025).
  • Universal, smooth, and localized function classes are mapped to amplitude-encoded quantum states with systematically controllable error.

7. Practical Guidelines and Theoretical Implications

The key principles for practice and design are:

  • Small bond dimension is guaranteed by the entanglement area law for smooth and localized $f$, allowing shallow circuits for the relevant function classes.
  • Hardware-adaptivity: IMPS and variants can be scheduled to match device connectivity, achieving near-optimal circuit depth and parallelism.
  • Error control is achieved by direct manipulation of the MPS bond dimension and Schmidt spectrum truncation, with variational bounds ensuring target fidelity.
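
The last point is mechanical in practice: at each cut, pick the smallest $\chi$ whose discarded Schmidt weight stays below the infidelity budget $\varepsilon^2$, which guarantees an overlap of at least $\sqrt{1-\varepsilon^2}$ with the exact state. A sketch (function name is illustrative):

```python
import numpy as np

def chi_for_target(f_vals, cut, eps):
    """Smallest chi at a given cut whose discarded Schmidt weight does not exceed eps**2."""
    psi = f_vals / np.linalg.norm(f_vals)
    n = int(np.log2(psi.size))
    s = np.linalg.svd(psi.reshape(2**cut, 2**(n - cut)), compute_uv=False)
    tail = np.append(np.cumsum((s**2)[::-1])[::-1], 0.0)  # tail[k] = sum of s_i^2 for i >= k
    chi = next(k for k in range(1, s.size + 1) if tail[k] <= eps**2)
    return chi, s

# usage: bond dimension needed at the central cut of a 10-qubit Gaussian for eps = 1e-3
x = np.arange(2**10, dtype=float)
f = np.exp(-0.5 * ((x - 511.5) / 120.0) ** 2)
chi, s = chi_for_target(f, 5, eps=1e-3)
overlap = np.sqrt(1.0 - np.sum(s[chi:] ** 2))  # >= sqrt(1 - eps^2) by construction
```

Repeating this over all cuts gives the per-bond truncation schedule for a target overall fidelity.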

A plausible implication is that the MPS encoding framework, when combined with hardware-aware scheduling and adaptive optimization (TNO, TCI), represents the most scalable method for quantum state preparation with prescribed fidelity for smooth and structured classical data.
