Module-LWE in Post-Quantum Cryptography

Updated 23 February 2026
  • Module-LWE is a generalization of LWE that leverages module arithmetic over cyclotomic rings to interpolate between vector and ring structures.
  • It underpins secure key-encapsulation and public-key encryption protocols, exemplified by NIST’s CRYSTALS-Kyber, ensuring quantum resilience.
  • Efficient implementations use number-theoretic transforms and robust reconciliation techniques, with benchmarks assessing concrete security against lattice, ML, and robust-regression attacks.

Module-Learning-With-Errors (Module-LWE) is a generalization of the Learning-With-Errors (LWE) problem, which forms the hardness foundation for several post-quantum cryptographic systems, most notably NIST's standardized CRYSTALS-Kyber. The security of Module-LWE underpins both public-key encryption and key-encapsulation protocols designed to remain secure even against quantum adversaries. Module-LWE interpolates between LWE and Ring-LWE by leveraging module (as opposed to vector) structure over cyclotomic rings, balancing tight security reductions with efficient implementations.

1. Formal Definition, Algebraic Structure, and Parameterization

Let $n$ be a power of two, $q$ an integer modulus, and $R_q = \mathbb{Z}_q[x]/(x^n + 1)$ the $2n$-th cyclotomic ring reduced modulo $q$. The module rank $k \geq 1$ and secret rank $\ell \leq k$ are chosen as parameters. In the principal instantiation, consider:

  • Secrets: $s \in R_q^\ell$, typically drawn from a "small" distribution (centered binomial $\Psi_\eta$, discrete Gaussian, or sparse binary).
  • Public samples: $a_1, \dots, a_k \in R_q^\ell$, drawn uniformly at random.
  • Error vector: $e = (e_1, \dots, e_k) \in R_q^k$, drawn from a suitable "noise" distribution.

The $i$-th Module-LWE sample is

$b_i = a_i^T s + e_i \in R_q.$

The Search-Module-LWE problem $\mathrm{MLWE}(n, \ell, k, q, \chi)$ is: given sample pairs $\{(a_i, b_i)\}_{i=1}^{k}$, with each $a_i \in R_q^\ell$ and $b_i \in R_q$ as above, recover $s$.

In the coefficient embedding, stacking the samples yields a system

$b = A s + e \pmod{q},$

with $A \in \mathbb{Z}_q^{(kn) \times (\ell n)}$, $s \in \mathbb{Z}_q^{\ell n}$, and $b, e \in \mathbb{Z}_q^{kn}$. Standard (unstructured) LWE corresponds to the degenerate ring with $n = 1$, and Ring-LWE is the $k = \ell = 1$ case.
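To make the definition concrete, the following toy sketch generates Module-LWE samples $b_i = a_i^T s + e_i$ with centered binomial noise. It is illustrative only: schoolbook $O(n^2)$ polynomial multiplication, non-cryptographic randomness, and helper names chosen here rather than taken from any standard.

```python
# Toy Module-LWE sample generation (illustrative only: schoolbook arithmetic,
# non-cryptographic randomness, hypothetical helper names).
import random

def poly_mul(a, b, q, n):
    """Negacyclic convolution in Z_q[x]/(x^n + 1) (schoolbook, O(n^2))."""
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * b[j]) % q
            else:  # x^n = -1, so overflow terms wrap with a sign flip
                res[k - n] = (res[k - n] - a[i] * b[j]) % q
    return res

def cbd(eta, n, rng):
    """Centered binomial distribution Psi_eta, per coefficient."""
    return [sum(rng.randint(0, 1) for _ in range(eta))
            - sum(rng.randint(0, 1) for _ in range(eta))
            for _ in range(n)]

def mlwe_samples(n=256, k=2, ell=2, q=3329, eta=2, seed=0):
    """Return (A, s, b) with b_i = <a_i, s> + e_i in R_q."""
    rng = random.Random(seed)
    s = [cbd(eta, n, rng) for _ in range(ell)]              # secret in R_q^ell
    A = [[[rng.randrange(q) for _ in range(n)] for _ in range(ell)]
         for _ in range(k)]                                 # uniform A in R_q^{k x ell}
    b = []
    for i in range(k):
        acc = [0] * n
        for j in range(ell):
            prod = poly_mul(A[i][j], s[j], q, n)
            acc = [(x + y) % q for x, y in zip(acc, prod)]
        e = cbd(eta, n, rng)
        b.append([(x + y) % q for x, y in zip(acc, e)])     # b_i = <a_i, s> + e_i
    return A, s, b
```

Calling `mlwe_samples()` with the Kyber512-like defaults yields $k$ samples over $R_q^\ell$; shrinking $n$ makes the toy instance easy to experiment with.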

The security assumption is that, for suitable (NIST/Kyber) parameters (e.g., $n = 256$, $k \in \{2, 3, 4\}$, $q = 3329$, centered binomial error), no polynomial-time (even quantum) adversary can recover $s$ or distinguish $(a, b)$ samples from uniform (Liu et al., 2024; Wenger et al., 2024).

2. Parameter Choices and Standardized Instances

NIST's CRYSTALS-Kyber utilizes Module-LWE as a core hard problem underpinning its cryptographic security (Liu et al., 2024, Wenger et al., 2024). The standardized parameters are:

| Variant | $n$ | $k$ | $q$ | $\chi_s = \chi_e$ | $\Psi_\eta$ parameter | Security Level |
|---|---|---|---|---|---|---|
| Kyber512 | 256 | 2 | 3329 | $\Psi_2$ (binomial) | $\eta = 2$ | NIST Level 1 |
| Kyber768 | 256 | 3 | 3329 | $\Psi_2$ (binomial) | $\eta = 2$ | NIST Level 3 |
| Kyber1024 | 256 | 4 | 3329 | $\Psi_2$ (binomial) | $\eta = 2$ | NIST Level 5 |

Module-LWE's algebraic structure allows efficient use of number-theoretic transforms for multiplication and reduces the representation size compared to standard LWE at the same security level.

3. Cryptanalytic Attacks and Concrete Security Benchmarks

Multiple cryptanalytic approaches have been investigated and benchmarked against Module-LWE, both in theory and in concrete implementation (Bassotto et al., 2 Oct 2025, Wenger et al., 2024):

  • Lattice reduction (uSVP / Kannan's embedding): attempts to solve a unique-shortest-vector instance via BKZ or similar lattice reduction. For Kyber parameters ($n = 256$), no successful secret recovery was reported within 1100 hours.
  • SALSA (ML, transformer-based): recovers secrets of Hamming weight $h \leq 11$ (Kyber512: $n = 256$, $k = 2$) in 28 hours on 256 CPUs, or 8 hours on 256 GPUs, at a 100% success rate.
  • Cool & Cruel: hybrid attack leveraging "cliff splitting" and greedy search; recovers $h \leq 11$ in time comparable to SALSA but with faster GPU recovery (0.1 hours on 1024 GPUs).
  • Dual hybrid meet-in-the-middle (Decision-LWE): feasible for $h' \leq 4$, limited by memory scaling ($2^{h'}$ table size).
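For intuition on the first attack family, the primal embedding lattice can be written down directly. The sketch below uses a common textbook form of the Kannan-style basis (not the exact construction of any cited paper): the lattice it generates contains $(s, e, 1)$ as an unusually short vector, which BKZ then hunts for.

```python
# Primal (Kannan-style) uSVP embedding for LWE samples b = A s + e mod q.
# Illustrative textbook form; row-vector convention, A of shape (m, d).
import numpy as np

def kannan_embedding(A, b, q):
    """Basis (rows) of a lattice containing the short vector (s, e, 1)."""
    m, d = A.shape
    B = np.zeros((d + m + 1, d + m + 1), dtype=np.int64)
    B[:d, :d] = np.eye(d, dtype=np.int64)                # identity block carries s
    B[:d, d:d + m] = (-A.T) % q                          # maps s to -A s (mod-q rep.)
    B[d:d + m, d:d + m] = q * np.eye(m, dtype=np.int64)  # q-vectors absorb reduction
    B[-1, d:d + m] = b                                   # final row embeds b
    B[-1, -1] = 1                                        # embedding coefficient
    return B
```

Taking the integer combination with coefficients $(s, k, 1)$, where $k$ accounts for the modular reductions, produces the short target $(s, e, 1)$.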

Attacks are much more effective for sparse secrets ($h \ll n$). Experimental results indicate that extremely sparse secrets (e.g., $h \lesssim 11$ for $k = 2$) should be avoided; this is well mitigated by the binomial ("full-weight") secrets selected in standardized schemes.

4. Modular Reduction, "Wrap-Around," and Robust Attacks

A fundamental challenge in practical attacks is the loss of information due to modular reduction, the "wrap-around" effect. The pre-reduction sample is $b_i' = \langle a_i, s \rangle + e_i \in \mathbb{Z}$, and the observation is $b_i = b_i' \bmod q$. If $|b_i'| > q/2$, the reduction "wraps around" the modulus, making regression and ML approaches less effective.

The NoMod ML-Attack (hybrid white-box robust regression) sidesteps direct modular modeling by treating wrap-around samples as statistical outliers (Bassotto et al., 2 Oct 2025). The attack proceeds by:

  1. Lattice-based reduction: Transform the system b=As+eb = A s + e via dual embedding, FLATTER, and BKZ to extract "reduced" equations.
  2. Amplification/Resampling: Utilize ring automorphisms (negacyclic rotations) and negative-circulant expansion for more samples.
  3. Selection and Pruning: Rank samples by estimated variance, discarding (putative) outliers.
  4. Robust Regression: Fit $R b \approx R A s$ using loss functions such as Tukey's biweight, which ignore the wrap-around "outlier" equations.
  5. Recovery: The robust regression coefficients directly reveal ss.
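The robust-regression core (steps 3–5) can be illustrated on a toy unstructured instance. Everything here is a simplification: the "reduced" rows produced by step 1 are simulated directly by sampling short rows, wrapped samples appear as gross outliers in $b$, and the parameter values and helper names are illustrative, not taken from the paper.

```python
# Toy NoMod-style recovery: Tukey-biweight IRLS on short rows with
# wrap-around samples acting as outliers. Illustrative simulation only.
import numpy as np

def tukey_irls(A, b, iters=50):
    """Iteratively reweighted least squares with Tukey's biweight loss.
    The cutoff c is re-estimated each iteration from the residual MAD."""
    s_hat = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary least-squares start
    for _ in range(iters):
        r = b - A @ s_hat
        sigma = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale
        c = 4.685 * max(sigma, 1e-9)
        w = np.where(np.abs(r) < c, (1 - (r / c) ** 2) ** 2, 0.0)
        sw = np.sqrt(w)                             # weighted least squares step
        s_hat = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return s_hat

# Toy instance: 400 short ("reduced") rows plus 100 wrapped rows as outliers.
rng = np.random.default_rng(0)
q, d, m_clean, m_wrap = 3329, 8, 400, 100
s_true = rng.integers(-2, 3, d)
A = rng.integers(-20, 21, (m_clean + m_wrap, d))
b = (A @ s_true).astype(float) + rng.normal(0, 3, m_clean + m_wrap)
b[m_clean:] = rng.uniform(-q / 2, q / 2, m_wrap)    # simulate wrap-around
s_hat = tukey_irls(A.astype(float), b)
```

The biweight gives zero weight to residuals beyond the cutoff, so the wrapped equations are effectively discarded and rounding `s_hat` recovers the small secret exactly on this toy instance.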

Empirical results demonstrate NoMod's ability to fully recover binary and sparse (e.g., binomial) Module-LWE secrets for parameters well beyond previously claimed security margins ($n = 350$ for binary secrets; $n = 256$ for sparse binomial; partial recovery in Kyber settings $(n, k) = (128, 3)$ and $(256, 2)$) (Bassotto et al., 2 Oct 2025).

5. Key-Reconciliation, Lattice Quantizers, and Decryption Failure Rates

The reconciliation mechanism in Module-LWE-based KEMs such as Kyber is formally equivalent to quantizing an MLWE sample according to a high-dimensional lattice codebook (Liu et al., 2024). The key constructs are:

  • Nested lattice chain: $\Lambda_3 \subseteq \Lambda_2 \subseteq \Lambda_1 \subset \mathbb{Z}^\ell$.
  • Encoding: $\mathrm{HelpRec}(x) = Q_{\Lambda_1}(x) \bmod \Lambda_2$.
  • Decoding: $\mathrm{Rec}(x, y) = Q_{\Lambda_2}(x - y) \bmod \Lambda_3$.

The decryption failure rate (DFR) is tightly bounded via noncentral chi-square tail bounds:

$\mathrm{DFR} \leq 1 - \left[ 1 - Q_{\ell/2}\!\left( \frac{r_{\mathrm{cov}}(\Lambda_1)}{\sigma_G},\; \frac{r_{\mathrm{pack}}(\Lambda_2)}{\sigma_G} \right) \right]^{n/\ell},$

where $\sigma_G^2$ aggregates error, secret, and quantization variance, and $Q_M$ is the generalized Marcum Q-function.
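This bound is straightforward to evaluate numerically. The sketch below implements the generalized Marcum Q-function for integer order via its standard Poisson-mixture representation (valid for even $\ell$); the radii and $\sigma_G$ values in the test are illustrative placeholders, not parameters from the paper.

```python
# Numeric evaluation of the Marcum-Q DFR bound (integer order only).
import math

def chi2_sf_even(x, m):
    """Survival function P(chi^2_{2m} > x), closed form for even dof."""
    s, term = 0.0, 1.0
    for j in range(m):
        s += term
        term *= (x / 2) / (j + 1)
    return math.exp(-x / 2) * s

def marcum_q(M, a, b, kmax=200):
    """Generalized Marcum Q_M(a, b), integer M >= 1, via the Poisson mixture
    Q_M(a, b) = sum_k Pois(a^2/2)(k) * P(chi^2_{2(M+k)} > b^2)."""
    lam = a * a / 2
    w = math.exp(-lam)
    total = 0.0
    for k in range(kmax):
        total += w * chi2_sf_even(b * b, M + k)
        w *= lam / (k + 1)
    return total

def dfr_bound(n, ell, r_cov, r_pack, sigma_g):
    """DFR <= 1 - [1 - Q_{ell/2}(r_cov/sigma_G, r_pack/sigma_G)]^(n/ell)."""
    qv = marcum_q(ell // 2, r_cov / sigma_g, r_pack / sigma_g)
    return 1 - (1 - qv) ** (n / ell)
```

Note that extreme DFR targets such as $2^{-263}$ underflow double precision; evaluating those regimes requires log-domain or arbitrary-precision tail computation.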

Optimal lattice quantizers (e.g., BW16, Leech24) can yield significant reductions in both bandwidth (up to 36%) and DFR (down to $2^{-263}$) compared to the original one-dimensional compressor, while maintaining provable security reductions (Liu et al., 2024).
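For reference, the baseline one-dimensional compressor is a Kyber-style Compress/Decompress pair; the generic integer form below (round-half-up convention) keeps the per-coefficient reconstruction error within roughly $q/2^{d+1}$.

```python
# Kyber-style 1D coefficient compression to d bits (generic integer form).

def compress(x, d, q=3329):
    """Map x in Z_q to d bits: round((2^d / q) * x) mod 2^d."""
    return ((x << d) + q // 2) // q % (1 << d)

def decompress(y, d, q=3329):
    """Approximate inverse: round((q / 2^d) * y)."""
    return (q * y + (1 << (d - 1))) >> d
```

Replacing this scalar quantizer with the high-dimensional lattice codebooks above is exactly what buys the bandwidth and DFR improvements.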

6. Algorithmic Ingredients and Implementation Considerations

Implementation of Module-LWE-based systems involves:

  • Polynomial arithmetic: number-theoretic transform (NTT) multiplication in $R_q$ with prime $q$, giving $O(k n \log n)$ complexity and supporting efficient KEM operations.
  • Lattice quantizers: for fixed small $\ell$, quantization is achieved by lookup or sphere decoding, with overall complexity $O(n \ell)$ over the $n/\ell$ blocks. Recommended quantizers are BW16 ($\ell = 16$) for security/DFR and Leech24 ($\ell = 24$) for bandwidth.
  • Robust regression and reduction: Tukey's biweight (with threshold $c$ proportional to the noise standard deviation) effectively excludes modular wrap-arounds in NoMod attacks.
  • Randomness: use of cryptographically secure RNGs is mandatory; linear-congruential generators introduce algebraic structure that lattice/ML attacks can exploit.
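NTT-based multiplication in $R_q$ can be sketched as follows. This toy recursive version assumes a prime $q$ with $2n \mid q - 1$ so that a complete negacyclic NTT exists; Kyber's $q = 3329$ only supports an incomplete seven-layer NTT, so the demo uses a small illustrative modulus instead, and all function names are our own.

```python
# Toy complete negacyclic NTT multiplication in Z_q[x]/(x^n + 1),
# assuming 2n | q - 1 (illustrative; Kyber's actual NTT is incomplete).

def ntt(a, omega, q):
    """Recursive radix-2 DFT over Z_q (len(a) a power of two), natural order."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % q, q)
    odd = ntt(a[1::2], omega * omega % q, q)
    out, w = [0] * n, 1
    for i in range(n // 2):
        t = w * odd[i] % q
        out[i] = (even[i] + t) % q
        out[i + n // 2] = (even[i] - t) % q
        w = w * omega % q
    return out

def find_psi(n, q):
    """Smallest primitive 2n-th root of unity mod q (n a power of two)."""
    for x in range(2, q):
        if pow(x, n, q) == q - 1:   # x^n = -1 forces order exactly 2n
            return x
    raise ValueError("q must satisfy 2n | q - 1")

def negacyclic_mul(a, b, q):
    """Multiply a, b in Z_q[x]/(x^n + 1) via psi-twisted NTTs, O(n log n)."""
    n = len(a)
    psi = find_psi(n, q)
    psi_inv, n_inv = pow(psi, q - 2, q), pow(n, q - 2, q)
    omega, omega_inv = psi * psi % q, pow(psi * psi % q, q - 2, q)
    at = ntt([a[i] * pow(psi, i, q) % q for i in range(n)], omega, q)
    bt = ntt([b[i] * pow(psi, i, q) % q for i in range(n)], omega, q)
    ct = ntt([x * y % q for x, y in zip(at, bt)], omega_inv, q)
    return [ct[i] * n_inv % q * pow(psi_inv, i, q) % q for i in range(n)]
```

The $\psi$-twist folds the $x^n = -1$ wraparound into the transform, so a single size-$n$ NTT suffices instead of zero-padding to length $2n$.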

Preprocessing for attacks uses dual embedding, FLATTER, and BKZ 2.0 with block sizes $20 \leq \beta \leq 40$; the dominant cost is $2^{28}$ to $2^{30}$ operations per tour for BKZ-40 (Bassotto et al., 2 Oct 2025; Wenger et al., 2024).

7. Security Implications and Best Practices

Benchmarking results and analysis indicate:

  • Parameter selection: maintaining secrets drawn from dense centered binomial distributions (e.g., $\Psi_2$, Hamming weight $h \gg 15$ for $k = 2$) ensures a substantial safety margin; extremely sparse secrets are susceptible to both ML and robust-regression attacks (Wenger et al., 2024).
  • Attack landscape: Robust-regression (NoMod) and modern ML attacks (SALSA, Cool & Cruel) outperform classical lattice attacks at the standard parameter sizes and must now be included in concrete security assessments.
  • Failure rate reduction: Replacing 1D Kyber reconciliation with high-dimensional lattice quantizers delivers both bandwidth and DFR improvements without undermining security (Liu et al., 2024).
  • Evolving benchmarks: Empirical attack performance significantly deviates from theory, especially at larger qq, lower Hamming weights, and for modern hybrid attacks. Regular benchmarking and recalibration of cost models are essential.
  • Future recommendations: Continued refinement of ML attack architectures for better wrap-around handling, further BKZ/enumeration improvements, and additional benchmarks for higher-rank MLWE (e.g., k>3k>3) and structured encryption schemes.

The combination of modern attack techniques (ML, robust regression), optimized parameter selection, and reconciliation via lattice quantization forms the current state-of-the-art in both the security analysis and design of MLWE-based post-quantum cryptography.

References:

(Bassotto et al., 2 Oct 2025, Liu et al., 2024, Wenger et al., 2024)
