
Tensor-Network Schnorr’s Sieve Factoring

Updated 6 April 2026
  • The paper introduces the Tensor-Network Schnorr’s Sieve, recasting the closest vector problem as a spin-glass Hamiltonian to extract smooth-relation pairs for integer factorization.
  • It employs variational tree tensor networks to approximate low-energy states, demonstrating empirical polynomial scaling in factoring 100-bit RSA numbers.
  • Scalability challenges persist due to high-degree polynomial cost, prompting research into optimized TTN geometries and hybrid quantum-classical approaches for cryptanalysis.

Tensor-Network Schnorr’s Sieve Factoring is a quantum-inspired classical algorithmic framework for integer factorization that recasts Schnorr’s lattice-based method as a combinatorial optimization problem and solves it via tensor-network techniques. The approach, referred to as the Tensor-Network Schnorr’s Sieve (TNSS) algorithm, encodes the closest vector problem (CVP) arising in integer factorization into a spin-glass Hamiltonian. The low-energy states of this Hamiltonian are optimized variationally using tree tensor networks (TTNs) and sampled to yield the smooth-relation (sr) pairs needed to construct congruences of squares in the factorization process. Demonstrations up to 100-bit RSA numbers, with empirical polynomial scaling in the bit-length $\ell$ of the semiprime, attest to its potential as a high-dimensional, quantum-inspired attack vector against classical cryptographic standards (Tesoro et al., 2024).

1. Schnorr’s Lattice-Based Factoring Framework

Let $N = p \cdot q$ be the semiprime to be factored, with bit-length $\ell = \lfloor \log_2 N \rfloor + 1$. The approach employs two prime bases: $P_1 = \{p_1, \dots, p_{\pi_1}\}$ of size $\pi_1$ and $P_2 = \{p_1, \dots, p_{\pi_2}\}$ of size $\pi_2$, with smoothness bound $B_2 = p_{\pi_2}$. The key step seeks sr-pairs $(u, u - vN)$ such that $u \equiv u - vN \pmod{N}$ and both $u$ and $|u - vN|$ factor completely over the prime bases ($u$ over $P_1$, $|u - vN|$ over $P_2$).
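Once enough sr-pairs are collected, linear algebra over GF(2) combines them into a congruence of squares $x^2 \equiv y^2 \pmod{N}$ with $x \not\equiv \pm y$, from which a factor follows via a gcd. A minimal sketch of that final step on a toy semiprime (brute force stands in here for the relation-combination stage):

```python
# Toy sketch: a congruence of squares x^2 ≡ y^2 (mod N) with x ≢ ±y (mod N)
# splits N via gcd(x - y, N). Brute force replaces combining sr-pairs.
from math import gcd, isqrt

N = 2021  # = 43 * 47, a toy semiprime

pair = None
for x in range(2, N):
    r = (x * x) % N
    y = isqrt(r)
    # need a genuine square residue and a nontrivial relation x ≢ ±y (mod N)
    if y * y == r and (x - y) % N != 0 and (x + y) % N != 0:
        pair = (x, y)
        break

x, y = pair
p = gcd(x - y, N)
print(p, N // p)  # → 43 47
```

Here $45^2 \equiv 2^2 \pmod{2021}$, and $\gcd(45 - 2, 2021) = 43$ reveals a factor.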

Schnorr’s sieve reframes the search for sr-pairs as a closest vector problem in the rank-$\pi_1$ lattice $\Lambda(B_{f,c})$ generated by the column vectors $b_1, \dots, b_{\pi_1}$ of a basis matrix $B_{f,c}$. Explicitly,

$$\Lambda(B_{f,c}) = \Big\{ \textstyle\sum_{i=1}^{\pi_1} z_i\, b_i \;:\; z_i \in \mathbb{Z} \Big\},$$

and the CVP seeks $b_{\mathrm{op}} = \arg\min_{b \in \Lambda(B_{f,c})} \| b - t \|$ for a target vector $t$.

Schnorr’s selection for $B_{f,c}$ and $t$ uses a random permutation $f$ of $\{1, \dots, \pi_1\}$ and a precision parameter $c > 0$, yielding

$$B_{f,c} = \begin{pmatrix} f(1) & & \\ & \ddots & \\ & & f(\pi_1) \\ \lceil N^c \ln p_1 \rceil & \cdots & \lceil N^c \ln p_{\pi_1} \rceil \end{pmatrix},$$

$$t = \big(0, \dots, 0, \lceil N^c \ln N \rceil\big)^{T}.$$

A lattice point $b = \sum_i z_i b_i$ encodes integer exponents $z_i$ corresponding to $u = \prod_{z_i > 0} p_i^{z_i}$ (for $z_i > 0$) and $v = \prod_{z_i < 0} p_i^{-z_i}$ (for $z_i < 0$). If $|u - vN|$ is $B_2$-smooth, a valid sr-pair is constructed. LLL reduction and Babai’s nearest-plane algorithm approximate the CVP solution in time polynomial in $\pi_1$.
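A minimal pure-Python sketch of Babai’s nearest-plane rounding on a toy 2D lattice (illustrative only; a real run would first LLL-reduce $B_{f,c}$, and the basis and target below are made up for the example):

```python
# Babai's nearest-plane algorithm: round the target onto successive
# Gram-Schmidt directions, from the last basis vector down to the first.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(B):
    """Unnormalized Gram-Schmidt orthogonalization of the rows of B."""
    Bstar = []
    for b in B:
        v = list(b)
        for u in Bstar:
            c = dot(v, u) / dot(u, u)
            v = [vi - c * ui for vi, ui in zip(v, u)]
        Bstar.append(v)
    return Bstar

def babai_nearest_plane(B, t):
    """Return (a lattice vector close to t, its integer coefficients)."""
    Bstar = gram_schmidt(B)
    b = list(t)                 # running residual
    coeffs = [0] * len(B)
    for i in reversed(range(len(B))):
        c = round(dot(b, Bstar[i]) / dot(Bstar[i], Bstar[i]))
        coeffs[i] = c
        b = [bi - c * Bij for bi, Bij in zip(b, B[i])]
    return [ti - bi for ti, bi in zip(t, b)], coeffs

B = [[1.0, 0.0], [0.5, 1.0]]   # toy (already reduced) basis, rows
t = [2.3, 3.4]                 # target
v, z = babai_nearest_plane(B, t)
print(v, z)  # close to [2.5, 3.0] with coefficients [1, 3]
```

The returned coefficients $z$ play the role of the exponent vector $z_i$ above.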

2. Tensor Network Mapping and Spin-Glass Hamiltonian

The lattice CVP is recast as finding the ground state of a spin-glass Hamiltonian acting on $\pi_1$ qubits. Assigning a binary variable $x_j \in \{0, 1\}$ to each basis direction, the cost function is, schematically,

$$H(x) = \Big\| \, t - b_{\mathrm{op}} - \sum_{j=1}^{\pi_1} x_j\, \mu_j\, b_j^{*} \, \Big\|^2,$$

where $b_{\mathrm{op}}$ is the classical Babai nearest lattice vector, $b_j^{*}$ are the Gram–Schmidt orthogonalized columns obtained via LLL, and the $\mu_j$ collect the associated Gram–Schmidt coefficients and integer rounding offsets; each $x_j$ shifts the $j$-th Babai coefficient by one unit toward the next-nearest lattice plane.

Computational-basis eigenstates $|x\rangle$ encode lattice vectors $b(x)$, with energies $E(x) = \| t - b(x) \|^2$; low-energy configurations correspond to short residual vectors and are therefore the ones likely to yield valid sr-pairs.
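The spin-glass picture can be made concrete on a toy instance: each bit-string selects a lattice vector near the Babai solution, and its energy is its squared distance to the target. A brute-force sketch (the basis, target, and one-unit coefficient shifts are simplified stand-ins for the paper’s encoding):

```python
# Enumerate all bit-strings x, score each candidate lattice vector b(x) by
# its "energy" E(x) = || t - b(x) ||^2, and keep the ground state.
from itertools import product

B = [[1.0, 0.0], [0.5, 1.0]]   # toy lattice basis (rows)
t = [2.3, 3.4]                 # CVP target
babai = [1, 3]                 # Babai coefficients for this instance

def energy(z):
    b = [sum(zj * B[j][k] for j, zj in enumerate(z)) for k in range(len(t))]
    return sum((ti - bi) ** 2 for ti, bi in zip(t, b))

# Each bit x_j shifts the j-th Babai coefficient by one unit (simplified).
candidates = {x: energy([c - xj for c, xj in zip(babai, x)])
              for x in product((0, 1), repeat=len(babai))}
ground = min(candidates, key=candidates.get)
print(ground, round(candidates[ground], 3))  # → (0, 0) 0.2
```

In this tiny instance the Babai vector is already optimal, so the ground state is $x = 0$; on hard instances the low-energy bit-strings improve on Babai’s rounding.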

The wavefunction $|\psi\rangle$ is represented as a binary TTN, with physical indices $x_j$ and virtual indices of bond dimension $\chi$. Operators such as $H$ are diagonal in the computational basis and can be efficiently represented as matrix product operators (MPOs) of small bond dimension. Integer-smoothness constraints are imposed during post-processing on the sampled bit-strings.

3. Algorithmic Implementation and Complexity

The TNSS proceeds via a variational search for the TTN ground state:

  • Initialization of a random TTN with bond dimension $\chi$.
  • Repeated sweeps update each tensor via diagonalization or SVD of a local effective Hamiltonian ($H_{\mathrm{eff}}$), with a per-sweep cost polynomial in $\chi$.
  • The OPES (Optimal Projected-Entangled Sampling) algorithm samples $N_s$ bit-strings from the low-energy tail, at a cost polynomial in $\chi$ per sample, after perturbation with small random transverse fields.

For each sampled bit-string, the exponent vector $z$ is reconstructed, the candidate pair $(u, u - vN)$ is formed, and $B_2$-smoothness of $|u - vN|$ is tested via trial division.
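The smoothness filter itself is elementary; a sketch of the trial-division test (illustrative, not the paper’s implementation):

```python
def is_smooth(m, primes):
    """Return True if |m| factors completely over the prime base `primes`."""
    m = abs(m)
    if m == 0:
        return False
    for p in primes:
        while m % p == 0:
            m //= p
    return m == 1  # anything left over is a prime factor outside the base

print(is_smooth(2**3 * 3 * 7**2, [2, 3, 5, 7]))  # → True
print(is_smooth(11 * 12, [2, 3, 5, 7]))          # → False (11 not in base)
```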

The total complexity per CVP is polynomial in $\pi_1$ and in the bond dimension $\chi$, dominated by the variational sweeps and the OPES sampling stage; the paper reports an explicit polynomial cost expression with empirically fitted exponents.

4. Empirical Performance and Scaling

Key empirical findings from benchmark studies include:

  • The average number of sr-pairs per CVP, $\langle n_{\mathrm{sr}} \rangle$, obeys an empirical scaling law in the bond dimension $\chi$ and the problem size, with fitted constants reported in the paper.

  • Achieving a fixed sr-pair yield per CVP at a given bit-length $\ell$ requires a bond dimension $\chi$ that grows polynomially with $\ell$, implying polynomial scaling of the overall cost with $\ell$.

  • The required bond dimension follows a power law in $\ell$, and benchmark runs at moderate bit-lengths already produce sr-pairs at modest $\chi$.
  • A 100-bit RSA challenge ($\ell = 100$) was factored by running a moderate number of CVP instances, extracting 11,278 sr-pairs in a few days on a single Xeon CPU and recovering the prime factors $p$ and $q$ of $N$.

These results demonstrate that TNSS achieves empirical polynomial cost scaling in the bit-length $\ell$ for factorizations of up to 100 bits.

5. Scalability and Practical Limitations

Although TNSS exhibits polynomial scaling, the dominant complexity terms (the variational sweeps and the sampling stage), combined with the polynomial growth of $\chi$ and $N_s$ with $\ell$, compose into an overall polynomial of high degree in $\ell$. This high-order scaling leads to prohibitive constant factors and runtimes for practically breaking cryptographically relevant moduli such as RSA-2048 ($\ell = 2048$).
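How the individual polynomial factors compound can be seen by composing the power laws; the exponents $a$ and $k$ below are hypothetical placeholders for illustration, not the paper’s fitted values:

```latex
\chi \sim \ell^{a}, \qquad \mathrm{cost} \sim \pi_1 \, \chi^{k}
\;\;\Longrightarrow\;\; \mathrm{cost} \sim \pi_1 \, \ell^{a k},
\quad \text{e.g. } a = 3,\; k = 3 \ \text{already gives degree } 9 \text{ in } \ell .
```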

Prospective optimizations include:

  • Improved TTN geometries or deeper tree structures to further reduce the required bond dimension $\chi$.
  • Enhanced sampling strategies to lower the number of samples $N_s$ needed per CVP.
  • Parallelization of CVP instances and TTN sweeps on large-scale HPC clusters.
  • Hybrid approaches leveraging quantum subroutines to propose tensor updates.

6. Cryptographic Implications

Currently, the General Number Field Sieve (GNFS), with sub-exponential complexity $\exp\big((1+o(1))\,(64/9)^{1/3} (\ln N)^{1/3} (\ln \ln N)^{2/3}\big)$, remains the most effective classical factoring approach for standard RSA key sizes. TNSS does not yet constitute a practical threat to large, cryptographic-scale moduli. However, if the observed polynomial cost scaling in TNSS extends to much larger sizes, classical tensor-network algorithms could undermine RSA security.
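For context, the GNFS heuristic can be evaluated numerically (ignoring the $o(1)$ term) to gauge the classical work factor for a 2048-bit modulus; this is a rough order-of-magnitude sketch:

```python
# Evaluate L_N[1/3, (64/9)^(1/3)] = exp(c * (ln N)^(1/3) * (ln ln N)^(2/3))
# for a 2048-bit N, reported as a base-2 exponent; o(1) term ignored.
from math import log

def gnfs_log2_cost(bits):
    ln_n = bits * log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3) / log(2)

print(round(gnfs_log2_cost(2048)))  # → 117 (i.e. roughly 2^117 operations)
```

This is of the same order as the standard ~112-bit security rating of RSA-2048; the neglected $o(1)$ term and memory costs account for the difference.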

This situation accentuates the urgency of adopting post-quantum cryptography and quantum key distribution, as classical “quantum-inspired” algorithms increasingly erode the computational intractability assumptions underlying modern public-key infrastructures.

7. Summary

Tensor-Network Schnorr’s Sieve recasts lattice-based factoring attacks as Hamiltonian optimization, leveraging variational TTN ground-state search and OPES sampling to efficiently extract smooth relations. The demonstrated factorization of 100-bit RSA numbers attests to the practical viability of the methodology at moderate parameters. Its polynomial complexity in the bit-length, albeit with a high polynomial degree, marks both a conceptual advance in classical cryptanalytic techniques and the key obstacle to immediate scalability. As tensor-network and lattice algorithms mature, the boundaries of classical cryptanalysis against public-key standards are expected to be tested further (Tesoro et al., 2024).
