
CKKS Scheme in Homomorphic Encryption

Updated 30 November 2025
  • CKKS scheme is a leveled, approximate homomorphic encryption protocol that enables SIMD-like operations on vectors of real or complex numbers while preserving privacy.
  • It employs ciphertext rescaling and bootstrapping to manage noise growth during homomorphic additions and multiplications, balancing precision with computational efficiency.
  • Performance enhancements via RNS, NTT/FFT optimizations, and hardware acceleration make CKKS suitable for large-scale secure machine learning and scientific computing applications.

The Cheon–Kim–Kim–Song (CKKS) scheme is a leveled, approximate homomorphic encryption (HE) protocol supporting SIMD-like, structure-preserving computation on vectors of complex or real numbers. It is based on the Ring Learning With Errors (RLWE) problem and specifically engineered for high-throughput, privacy-preserving analytics and machine learning on encrypted floating-point or fixed-point data. CKKS supports vectorized approximate addition, multiplication, and rotation, with ciphertext-level rescaling to control noise and precision. Its use now spans scientific computing, MLaaS, confidential control synthesis, and privacy-resilient outsourced inference, with ecosystem-scale deployments via OpenFHE, SEAL, TenSEAL, nGraph-HE2, and GPU-specialized libraries.

1. Algebraic Setting, Encoding, and Basic Algorithms

The CKKS scheme is constructed over the cyclotomic ring $R = \mathbb{Z}[X]/(X^N + 1)$, where $N$ is a power of two. The plaintext space consists of $N/2$ packed complex entries embedded via an approximate canonical embedding: numerical vectors $z \in \mathbb{C}^{N/2}$ are mapped to polynomials $m(X) \in R$ by an inverse DFT at primitive $2N$-th roots of unity, followed by multiplication with a global scaling factor $\Delta \gg 1$ and coefficient rounding. For a modulus $q$ (or RNS modulus chain $\{q_0, \ldots, q_L\}$), the ciphertext ring is $R_q = R/qR$.
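The encode/decode round trip can be sketched in pure Python. This is a toy encoder with a simplified slot ordering (real libraries index slots by powers of 5 modulo $2N$); it evaluates/interpolates at the primitive $2N$-th roots of unity, pairing conjugate points with conjugate values so that coefficients come out real before scaling by $\Delta$ and rounding.

```python
import cmath

def encode(z, N, delta):
    """Toy CKKS encoder: map N/2 complex slots to N integer coefficients.

    Evaluation points are the primitive 2N-th roots zeta^(2j+1); conjugate
    points receive conjugate values so the coefficients come out real.
    (Simplified slot ordering; real libraries index slots by powers of 5.)
    """
    half = N // 2
    assert len(z) == half
    # Target values at the N roots: the slots, then their conjugates reversed,
    # so that t[N-1-j] = conj(t[j]) matches the conjugate-point pairing.
    t = list(z) + [x.conjugate() for x in reversed(z)]
    coeffs = []
    for k in range(N):
        # Inverse transform: coeff_k = (1/N) * sum_j t_j * zeta^{-(2j+1)k}
        acc = sum(t[j] * cmath.exp(-1j * cmath.pi * (2 * j + 1) * k / N)
                  for j in range(N))
        coeffs.append(round((acc / N).real * delta))  # scale by delta, round
    return coeffs

def decode(coeffs, N, delta):
    """Evaluate the polynomial at the slot roots and undo the scaling."""
    half = N // 2
    return [sum(c * cmath.exp(1j * cmath.pi * (2 * j + 1) * k / N)
                for k, c in enumerate(coeffs)) / delta
            for j in range(half)]
```

The rounding step is where encoding becomes approximate: each coefficient absorbs up to half a unit of error, which is why a large $\Delta$ is needed for tight precision.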

Key Generation and Encryption

  • Secret key: $s(X) \in R$ is sampled from a small (ternary or discrete-Gaussian) distribution.
  • Public key: $(b(X), a(X)) \in R_q^2$ with $a(X)$ uniform, $e(X)$ drawn from the error distribution, and $b(X) = -a(X) \cdot s(X) + e(X)$.
  • Encryption: Given a scaled and rounded plaintext $\Delta m(X)$, sample a small ephemeral $v$ and fresh errors $e_0, e_1$, and form the ciphertext $c = (c_0, c_1)$ as:

$$c_0 = v \cdot b(X) + \Delta m(X) + e_0 \pmod{q}, \qquad c_1 = v \cdot a(X) + e_1 \pmod{q}$$

  • Decryption (with secret $s$): Compute $c_0 + c_1 s \bmod q \approx \Delta m(X) + e$, divide by $\Delta$, and decode back to the vector space.
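A minimal sketch of this key generation, encryption, and decryption cycle, using the convention $b = -a \cdot s + e$ so that $c_0 + c_1 s = \Delta m + v e + e_0 + e_1 s$. The parameters below are toy values chosen only so the arithmetic is visible; they provide no security.

```python
import random

N, Q, DELTA = 16, 1 << 40, 1 << 20  # toy parameters: far too small for security

def poly_mul(a, b):
    """Multiply in Z_Q[X]/(X^N + 1): negacyclic convolution."""
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % Q
            else:
                res[k - N] = (res[k - N] - ai * bj) % Q  # X^N = -1
    return res

def add(a, b):
    return [(x + y) % Q for x, y in zip(a, b)]

def small():
    """Ternary sample, standing in for both secret and error distributions."""
    return [random.choice((-1, 0, 1)) for _ in range(N)]

def keygen():
    s, a, e = small(), [random.randrange(Q) for _ in range(N)], small()
    b = add([(-x) % Q for x in poly_mul(a, s)], e)   # b = -a*s + e
    return s, (b, a)

def encrypt(pk, m):
    """m: N integer coefficients, already scaled by DELTA."""
    b, a = pk
    v, e0, e1 = small(), small(), small()
    return add(add(poly_mul(v, b), m), e0), add(poly_mul(v, a), e1)

def decrypt(sk, ct):
    c0, c1 = ct
    raw = add(c0, poly_mul(c1, sk))                  # = DELTA*m + noise (mod Q)
    centered = [x - Q if x > Q // 2 else x for x in raw]
    return [x / DELTA for x in centered]
```

Decryption recovers $m$ only approximately: the residual noise $v e + e_0 + e_1 s$ survives the division by $\Delta$, which is exactly the approximation CKKS embraces.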

Homomorphic Operations

  • Addition: $c + c' = (c_0 + c_0',\ c_1 + c_1')$ (noise increases additively).
  • Multiplication: the component-wise product expands the ciphertext to $(d_0, d_1, d_2)$; relinearization via an evaluation key restores the two-component form, and the result is rescaled: coefficients and noise are each divided by (approximately) $\Delta$ and the modulus drops to the next level $q_{i+1}$.
  • Rotation: cyclic slot rotation is supported via Galois/evaluation keys and requires an additional key switch.
  • Rescaling: after multiplication, rescaling divides the ciphertext by $\Delta$ (in RNS, by the dropped chain prime $q_i \approx \Delta$). This reduces both the modulus and the available precision, and controls growth of the encrypted magnitudes.

Each of these operations preserves the vectorized structure, enabling "SIMD-like" parallel encrypted computation on large batches of values (Kholod et al., 29 Oct 2024, Pathak, 2022).
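The multiply-then-rescale cycle can be illustrated without any encryption at all, as pure fixed-point bookkeeping: two $\Delta$-scaled values multiply to scale $\Delta^2$, and rescaling divides by $\Delta$ and consumes one level. The class below is an illustrative stand-in, not a library API.

```python
DELTA = 1 << 40

class Scaled:
    """Plaintext stand-in for a CKKS ciphertext: a fixed-point value plus
    scale and level bookkeeping (no encryption; illustration only)."""

    def __init__(self, value, scale=DELTA, level=3):
        self.raw = round(value * scale)   # integer "coefficient"
        self.scale = scale
        self.level = level

    def mul(self, other):
        assert self.level == other.level > 0, "out of levels: bootstrap needed"
        prod = Scaled(0, self.scale, self.level)
        prod.raw = self.raw * other.raw          # scale is now DELTA**2
        prod.scale = self.scale * other.scale
        return prod.rescale()

    def rescale(self):
        out = Scaled(0, self.scale // DELTA, self.level - 1)
        out.raw = round(self.raw / DELTA)        # value AND noise shrink by DELTA
        return out

    def value(self):
        return self.raw / self.scale

x, y = Scaled(3.25), Scaled(-1.5)
z = x.mul(y)   # product at scale DELTA again, one level consumed
```

The `assert` in `mul` mirrors the real constraint: once the level counter reaches zero the modulus chain is exhausted and only bootstrapping (Section 4) can continue the computation.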

2. Noise Growth, Security, and Parameterization

Security is based on the decisional RLWE problem for the chosen ring and modulus parameters. Key parameters include:

  • Ring dimension $N$: security grows with $N$, which must meet the 128-bit RLWE threshold; it also sets the packing size.
  • Modulus chain $\{q_0, \ldots, q_L\}$: controls the multiplicative depth $L$. A longer chain allows more multiplications (levels) but increases key and ciphertext size, and the noise budget must be managed relative to $q_0$ (Xu et al., 23 Nov 2025).
  • Scaling factor $\Delta$: governs fixed-point precision; a larger $\Delta$ improves relative error but consumes more modulus bits on each operation.

Noise behaves as follows:

  • Homomorphic addition increases error additively; multiplication increases error multiplicatively (plus rescale truncation).
  • After $t$ multiplications, noise grows approximately like $O(B_{\mathrm{enc}} \cdot (\Delta \sigma \sqrt{N})^t)$, requiring parameters to be tuned so that the final noise after $L$ levels remains much smaller than $q_0$ (Pathak, 2022).
  • Rescaling cuts both signal and noise by $\Delta$ (or an equivalent chain prime).
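The growth bound above translates into simple bit arithmetic. The sketch below evaluates it for sample parameters ($\sigma = 3.2$ is a commonly used error standard deviation; the unit constant is illustrative): each multiplication adds roughly $\log_2(\Delta\sigma\sqrt{N})$ bits of noise, and the subsequent rescale removes $\log_2 \Delta$ of them, which is why chain primes are sized near $\Delta$.

```python
import math

# Per-multiplication growth implied by B_enc * (delta * sigma * sqrt(N))**t:
# each multiplication adds about log2(delta * sigma * sqrt(N)) bits of noise,
# and the subsequent rescale removes log2(delta) of them.
delta, sigma, N = 2.0**40, 3.2, 2**14   # sigma = 3.2 is a common choice
bits_per_mult = math.log2(delta * sigma * math.sqrt(N))
residual_bits_per_level = bits_per_mult - math.log2(delta)
```

With these numbers each level costs about 48.7 bits before rescaling and leaves a residue of under 9 bits after it, so the per-level budget is dominated by $\Delta$ itself.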

Empirical studies show that, for tuned parameters, standard computations (matrix multiplications, ML inference) yield mean squared errors as low as $10^{-7}$ to $10^{-6}$, with negligible impact on application-level accuracy (Khan et al., 2023).

3. Performance Engineering, SIMD Packing, and High-Throughput Implementations

CKKS naturally supports SIMD packing, encoding $N/2$ complex numbers into a single ciphertext. These slots are operated on simultaneously via slot-wise arithmetic, supporting high-throughput batch analytics:

  • Specialized packing schemes are used for convolution, matrix-matrix, and vector-matrix operations, e.g., "im2col" for convolutional layers, diagonal and repeated/expanded formats for FHE-aware DNNs (Duc et al., 14 Jul 2025, Pirillo et al., 24 Jun 2025).
  • Implementations exploit RNS and NTT/FFT representations for efficient polynomial arithmetic (Kholod et al., 29 Oct 2024, Agulló-Domingo et al., 7 Jul 2025).
  • On CPUs, graph-level/engineered optimizations include batch-axis packing, constant-vs-vector encodings, scalar/plaintext kernel paths, and "lazy" rescaling. These allow nGraph-HE2, for instance, to achieve 1,998 images/s on CryptoNets (MNIST) (Boemer et al., 2019).
  • Hardware acceleration on systolic arrays (e.g., Cornami's FracTLcore) or GPUs (FIDESlib, OpenFHE+HEXL) yields $70\times$–$300\times$ speedups for key operations, including bootstrapping and core HE primitives (Ovichinnikov et al., 15 Oct 2025, Agulló-Domingo et al., 7 Jul 2025).
  • FIDESlib demonstrates end-to-end bootstrapping times of 73.5 ms for 64 slots (about $80\times$ faster than AVX-accelerated OpenFHE) and scales efficiently to 32k-slot settings.

Trade-offs in packing and parallelism must balance latency, ciphertext expansion, and the bottleneck on slot-wise multiplications and rotations. Highly parallel workloads can routinely reach close to GPU memory-bandwidth limits (Agulló-Domingo et al., 7 Jul 2025, Ovichinnikov et al., 15 Oct 2025).
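The diagonal packing mentioned above can be illustrated on plain lists, with slot rotation modeled as a cyclic shift. In actual CKKS each line below is a slot-wise ciphertext operation (rotation via Galois keys, multiplication by an encoded plaintext diagonal); this Halevi–Shoup-style layout computes a matrix-vector product with $n$ slot-wise multiplies and rotations instead of $n^2$ scalar operations.

```python
def rotate(v, k):
    """Cyclic left rotation; in CKKS this is a Galois-key slot rotation."""
    return v[k:] + v[:k]

def diag_matvec(M, v):
    """Matrix-vector product from n generalized diagonals:
    out = sum_k diag_k(M) * rotate(v, k), all slot-wise."""
    n = len(v)
    out = [0.0] * n
    for k in range(n):
        d = [M[i][(i + k) % n] for i in range(n)]   # k-th generalized diagonal
        r = rotate(v, k)
        out = [o + dk * rk for o, dk, rk in zip(out, d, r)]
    return out
```

Since rotations need their own evaluation keys and are among the more expensive slot operations, packing layouts are often chosen precisely to minimize the number of distinct rotation amounts.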

4. Approximate Bootstrapping and Deep Circuits

To support circuits deeper than the modulus chain allows, CKKS uses approximate bootstrapping. The canonical Cheon–Kim–Kim–Song bootstrapping protocol proceeds as:

  1. CoeffToSlot (C2S): Homomorphic DFT transforms coefficient encoding to slot encoding.
  2. Approximate modulus reduction: Homomorphically evaluate the modular reduction (sawtooth or sine/cosine function) in each slot, using Chebyshev polynomial approximation and rescale after each step.
  3. SlotToCoeff (S2C): Inverse DFT to return to coefficient form.
  4. Key-switch and modulus refresh: Output is a ciphertext at refreshed modulus and reestablished noise budget, but with the approximate nature of the scheme preserved (i.e., noise floor cannot be reduced below initial encoding error).

Typical bootstrapping depth is 16–25 levels, constrained by the polynomial approximation of modular reduction. Application pipelines (e.g., fully encrypted deep-learning training in ReBoot (Pirillo et al., 24 Jun 2025)) interleave bootstrapping between network blocks to manage cumulative noise and maintain precision. ReBoot demonstrates that this approach allows fully encrypted training of deep MLPs with competitive accuracy on image-recognition and tabular ML tasks, reducing training latency by up to $8.83\times$ over prior frameworks.
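Step 2 above hinges on approximating a periodic function by a polynomial. A self-contained sketch of Chebyshev interpolation, evaluated with the Clenshaw recurrence (additions and multiplications only, hence HE-compatible); the scaled sine interpolated in the test is the standard stand-in for modular reduction near multiples of $q$. Degrees and ranges here are illustrative, not a library's actual EvalMod configuration.

```python
import math

def cheb_coeffs(f, degree):
    """Chebyshev interpolation coefficients of f on [-1, 1], sampled at
    the Chebyshev nodes cos(pi*(k + 1/2)/n)."""
    n = degree + 1
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fv = [f(x) for x in nodes]
    coeffs = []
    for j in range(n):
        c = 2.0 / n * sum(fv[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0   # standard halving of the constant term
    return coeffs

def clenshaw(coeffs, x):
    """Evaluate the Chebyshev series using only adds/mults (HE-friendly)."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]
```

The Chebyshev basis matters in practice: its coefficients decay fast for smooth periodic targets, and Clenshaw-style evaluation keeps intermediate magnitudes (and hence noise) small compared with naive monomial evaluation.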

5. Fault Tolerance, Error Sources, and Reliability Engineering

CKKS’s approximate design yields native resilience to low-order bit-flips in plaintexts, but performance-oriented variants utilizing RNS and NTT are highly sensitive to even a single coefficient-level or NTT residue error:

  • In "vanilla" CKKS (HEAAN, OpenFHE without RNS), a flip of plaintext bit $j$ yields an $L_2$ error proportional to $2^j/\Delta$, so low-order flips are benign while higher-order flips can catastrophically corrupt the decrypted output (Mazzanti et al., 28 Jul 2025).
  • Enabling RNS or NTT optimizations amplifies error magnitude, causing decryption failure even for LSB flips; error in a single NTT residue reconstructs as a high-magnitude error in the full plaintext.
  • Practical error-resilient design advises large scaling factors $\Delta$ (to raise the error-explosion threshold), sparsified slot allocation (for partial immunity), and, for ultra-reliable workloads, custom error-detecting codes at the plaintext or ciphertext level (Mazzanti et al., 28 Jul 2025).
  • Binary CKKS variants introduce BCH codes and operate over binary polynomial rings, achieving deterministic, bit-exact decryption with negligible failure probabilities, and replace rescaling with explicit bootstrapping (Refresh), simplifying complexity at modest speed cost for multiplications (Chen et al., 4 Aug 2025).
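The $2^j/\Delta$ scaling is easy to reproduce directly on a $\Delta$-scaled coefficient (values below are illustrative): a low-order flip perturbs the decoded value negligibly, while a high-order flip dwarfs it.

```python
DELTA = 1 << 40

def flip_bit(coeff, j):
    """Flip bit j of an integer plaintext coefficient (a single-event upset)."""
    return coeff ^ (1 << j)

value = 3.14159
coeff = round(value * DELTA)                     # Delta-scaled coefficient

low = abs(flip_bit(coeff, 3) / DELTA - value)    # error ~ 2**3 / DELTA: benign
high = abs(flip_bit(coeff, 55) / DELTA - value)  # error ~ 2**55 / DELTA = 2**15
```

The same flip in an RNS residue or NTT-domain word does not stay localized: CRT reconstruction spreads it across the whole coefficient, which is the amplification effect described above.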

6. Parameter Configuration Automation and Application-Specific Tuning

CKKS parameter selection (ring dimension $N$, modulus chain $\{q_i\}$, scaling factor $\Delta$, and packing layout) is a high-dimensional, tightly coupled optimization:

  • A larger $N$ improves security but increases operation cost; deeper chains support larger circuits but further increase the computational burden (Xu et al., 23 Nov 2025).
  • The FHE-Agent framework employs an LLM-guided, multi-fidelity search process that combines static analysis, simulated profiling, and encrypted benchmarking to tune and validate configurations for given ML workloads. It automatically achieves precision and latency trade-offs unattainable with prior heuristics and is able to recover 128-bit RLWE-secure CKKS setups for complex models (e.g., AlexNet), where baseline compilers fail (Xu et al., 23 Nov 2025).

Typical application-tuned parameters include $N = 2^{14}$ or $2^{15}$, chain length $L = 3$–$8$, modulus primes of 45–60 bits, and $\Delta = 2^{40}$–$2^{59}$, as dictated by circuit depth and accuracy targets.
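These typical values can be sanity-checked against the maximum modulus sizes published in the HomomorphicEncryption.org security standard (128-bit classical security, ternary secrets). The chain layout below, one large first prime, $L$ scale-sized primes, and one special/key-switching prime, is a common convention, though libraries differ in the details.

```python
# Max total modulus bits log2(Q) for 128-bit classical security with ternary
# secrets, per the HomomorphicEncryption.org security standard tables.
MAX_LOGQ = {2**12: 109, 2**13: 218, 2**14: 438, 2**15: 881}

def max_depth(N, first_bits=60, scale_bits=40, special_bits=60):
    """Largest multiplicative depth L such that
    first prime + L scale primes + special prime fits N's modulus budget."""
    budget = MAX_LOGQ[N] - first_bits - special_bits
    return max(budget // scale_bits, 0)
```

With a 60/40/60-bit layout this gives depth 7 at $N = 2^{14}$ and 19 at $N = 2^{15}$, matching the rule of thumb that deep circuits force the ring dimension up even when packing alone would not.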

7. Research Applications, Limitations, and Future Directions

CKKS has been deployed for:

  • Secure cloud-based control synthesis (Model-based RL, value iteration, SARSA, Z-learning) with provable bounds on noise-induced error in limit points (Suh et al., 2021).
  • Large-scale privacy-preserving machine learning (encrypted inference and training, face detection on UAV images, neural network training with bootstrapping), with less than 1% accuracy loss versus plaintext (Pirillo et al., 24 Jun 2025, Duc et al., 14 Jul 2025).
  • Fast encrypted scientific computation (finite-difference PDEs, matrix multiplications) with negligible MSE under tuned settings (Kholod et al., 29 Oct 2024, Khan et al., 2023).
  • Efficient privacy-protecting ranking, order statistics, and SIMD-parallel sorting, exploiting slot-wise comparison logic (Mazzone et al., 19 Dec 2024).

Outstanding limitations include:

  • Bootstrapping cost, despite being reduced by GPU/FPGA acceleration and blockwise circuit design, remains a key bottleneck.
  • All applications depend critically on precise error budgeting (balancing scale, chain, and packed slots) and automated configuration is an ongoing field of research.
  • Division and other non-polynomial non-linearities require careful polynomial approximation, since CKKS natively supports only addition and multiplication.
  • Resistance to hardware faults (e.g., bitflips in RNS/NTT) is not guaranteed without redundancy.
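The division limitation is commonly worked around with iterations built from additions and multiplications alone, e.g., Newton's method for $1/a$. A sketch, assuming a known range for $a$ so that a fixed initial guess converges (in an encrypted setting the guess and iteration count must be fixed in advance):

```python
def he_reciprocal(a, iters=8, init=0.1):
    """Approximate 1/a using only adds and mults: x <- x * (2 - a*x).
    Converges quadratically whenever the initial guess gives |1 - a*init| < 1."""
    x = init
    for _ in range(iters):
        x = x * (2.0 - a * x)   # each step squares the relative error
    return x
```

Each iteration costs two multiplications (hence two levels), so the depth budget, not accuracy, is usually what caps the iteration count under encryption.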

Continued developments focus on fault tolerance, low-overhead bootstrapping, multi-GPU scaling, and integration of hybrid HE-MPC and circuit privacy protocols.


For a detailed technical treatment and implementation guidance, see (Kholod et al., 29 Oct 2024; Agulló-Domingo et al., 7 Jul 2025; Pathak, 2022; Xu et al., 23 Nov 2025; Pirillo et al., 24 Jun 2025; Mazzanti et al., 28 Jul 2025; Mazzone et al., 19 Dec 2024; Khan et al., 2023; Suh et al., 2021; Duc et al., 14 Jul 2025; Boemer et al., 2019; Chen et al., 4 Aug 2025; Ovichinnikov et al., 15 Oct 2025).
