PRIVSPIKE: Encrypted SNN Inference

Updated 12 October 2025
  • PRIVSPIKE is a privacy-preserving inference framework for deep Spiking Neural Networks that uses CKKS-based homomorphic encryption to perform computations on encrypted data.
  • It employs a Chebyshev polynomial approximation and a scheme-switching method to simulate the Leaky Integrate-and-Fire activation function under encryption.
  • Experimental evaluations show that PRIVSPIKE achieves high accuracy with low inference latency, making it practical for applications in privacy-sensitive domains such as healthcare.

PRIVSPIKE is a privacy-preserving inference framework specifically designed for deep Spiking Neural Networks (SNNs) that leverages homomorphic encryption—most notably the CKKS scheme—to enable secure, encrypted computation of SNN models on sensitive data. This system is architected to address the confidentiality concerns inherent to deep learning applications on large and potentially private datasets, offering both robust cryptographic privacy guarantees and support for the energy-efficient, event-driven computation paradigms characteristic of SNNs (Njungle et al., 5 Oct 2025).

1. Motivations and Design Goals

SNNs are valued for their sparse, event-driven activations, low power consumption, and suitability for neuromorphic hardware. However, even with these advantages, the privacy concerns of deep learning—such as data exposure during inference—persist when models are deployed in an untrusted environment (e.g., MLaaS, cloud, edge). PRIVSPIKE addresses these issues by employing fully homomorphic encryption (FHE) so that inference can be performed directly on ciphertexts without ever revealing sensitive input or intermediate computations to the computation provider. This is especially relevant in domains with regulatory or consumer privacy constraints, such as healthcare and biomedical sensing.

2. Homomorphic Encryption and the CKKS Scheme

At the core of PRIVSPIKE is the Cheon-Kim-Kim-Song (CKKS) homomorphic encryption scheme, which is optimized for approximate real-number arithmetic critical for neural network inference. CKKS provides:

  • Approximate arithmetic on encrypted floating-point values, minimizing reconstruction error via controlled precision loss.
  • SIMD packing: many input values are packed into the slots of a single plaintext in the ring $R = \mathbb{Z}[X]/(X^N + 1)$, enabling SIMD (Single Instruction, Multiple Data)-style parallelism for matrix and convolutional operations.
  • Efficient ciphertext rotation and addition/multiplication—crucial for implementing deep network layers and feature maps.
  • Security based on RLWE (Ring Learning With Errors)—ensuring input confidentiality in the cryptographic sense.
  • Slot reuse and rotation key optimization—further reducing computation and memory overhead in real-world deployments.

PRIVSPIKE utilizes CKKS to encrypt not just the raw input data, but also all intermediate activations and the outputs of each SNN layer during inference, ensuring persistent privacy throughout the model pipeline.
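
As a concrete illustration of slot packing, the sketch below uses the open-source TenSEAL library to encrypt a small vector and apply a slot-wise affine map of the kind that underlies encrypted linear layers. TenSEAL and the parameter choices are illustrative assumptions; the paper does not tie PRIVSPIKE to this library.

```python
# A minimal sketch of SIMD-packed CKKS arithmetic using the open-source
# TenSEAL library. Purely illustrative: these are standard tutorial
# parameters, not PRIVSPIKE's own settings.
import tenseal as ts

ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()   # rotation keys, needed for slot rotations

# One ciphertext holds many values ("slots"); arithmetic is slot-wise.
x = ts.ckks_vector(ctx, [0.5, -1.2, 3.3, 0.0])
y = x * [0.9, 0.9, 0.9, 0.9] + [0.1, 0.1, 0.1, 0.1]   # encrypted affine map

print(y.decrypt())   # approximately [0.55, -0.98, 3.07, 0.10]
```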

3. Encrypted SNN Inference: Algorithmic Innovations

Central to enabling SNN inference under homomorphic encryption is the challenge of evaluating the Leaky Integrate-and-Fire (LIF) activation function, which inherently involves a thresholding (greater-than) operation. Since CKKS supports only polynomial operations, PRIVSPIKE introduces two strategies:

A. Polynomial Approximation for LIF Firing

  • Membrane potential $V(t)$ is scaled to $[-1, 1]$, and the thresholding function is approximated by a Chebyshev polynomial expansion:

$$f(x) \approx \sum_{n=0}^{N} c_n T_n(x)$$

where $T_n(x)$ are the Chebyshev polynomials defined recursively ($T_0(x) = 1$, $T_1(x) = x$, $T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x)$). The polynomial order (e.g., $N = 50$) governs both fidelity and computational cost. This allows the firing decision to be approximated as a smooth polynomial, compatible with the operations supported by CKKS.
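
A plaintext numpy sketch of fitting such a series to the firing threshold follows. The sampling grid, and placing the threshold at 0 after the scaling to $[-1, 1]$, are illustrative assumptions:

```python
# Fit a degree-N Chebyshev series to the (scaled) firing threshold on [-1, 1].
# Plaintext numpy stands in for what PRIVSPIKE evaluates under CKKS.
import numpy as np
from numpy.polynomial import chebyshev as C

N = 50                                  # polynomial order from the text
x = np.linspace(-1.0, 1.0, 2001)        # sampling grid (assumption)
step = (x > 0.0).astype(float)          # ideal spike decision
coeffs = C.chebfit(x, step, N)          # c_0 ... c_N

# Evaluation uses only additions and multiplications, so the same series
# can be computed slot-wise on ciphertexts.
print(C.chebval(0.3, coeffs))    # close to 1.0: potential above threshold
print(C.chebval(-0.3, coeffs))   # close to 0.0: no spike
```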

B. Scheme-Switching Algorithm

  • For higher-precision spike decisions, PRIVSPIKE implements a scheme-switching approach: most SNN operations are performed under CKKS, but the membrane potential is converted to a Boolean-friendly scheme (e.g., TFHE) just for the spike comparison. After the spike event is computed (i.e., the encrypted test $V(t) > \text{Th}$), the resulting binary spike vector is switched back into the CKKS domain for further computation.
  • This hybrid approach increases precision, yielding encrypted inference accuracy nearly matching plaintext, at the expense of increased computational cost and ciphertext-conversion overhead.

Both algorithms support arbitrary SNN depth and are compatible with complex event-driven architectures such as LeNet-5 and ResNet-19.
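
A structural sketch of one LIF step under this hybrid strategy is shown below. The conversion helpers are plaintext stand-ins with hypothetical names (the paper's actual switching calls are discussed in Section 6), and the soft-reset convention is an assumption made for illustration:

```python
# One LIF time step with scheme-switching, sketched over plaintext numpy
# arrays. The three helpers are stand-ins for real CKKS<->TFHE conversions;
# their names are hypothetical, as is the soft-reset convention.
import numpy as np

def ckks_to_tfhe(v):            # stand-in: switch a CKKS ciphertext to TFHE
    return v

def tfhe_greater_than(v, th):   # stand-in: exact encrypted comparison
    return (v > th).astype(float)

def tfhe_to_ckks(bits):         # stand-in: switch binary spikes back to CKKS
    return bits

def lif_step(V, I, leak, threshold):
    V = leak * V + I                                    # CKKS: polynomial ops only
    spikes = tfhe_to_ckks(
        tfhe_greater_than(ckks_to_tfhe(V), threshold))  # exact V(t) > Th in TFHE
    V = V - threshold * spikes                          # soft reset (assumption)
    return V, spikes

V, spikes = lif_step(np.zeros(3), np.array([0.4, 1.3, 0.9]),
                     leak=0.5, threshold=1.0)
print(spikes)   # [0. 1. 0.]
```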

4. Experimental Validation and Performance

PRIVSPIKE was benchmarked on MNIST, CIFAR-10, Neuromorphic MNIST (N-MNIST), and CIFAR-10 DVS using both LeNet-5 and ResNet-19 SNN models.

Dataset, Model            Plaintext Acc.   Poly. Approx. Acc.   Scheme-Switch Acc.   Poly. Time (s)   Scheme-Switch Time (s)
MNIST, LeNet-5            98.90%           95.70%               98.10%               28               110
N-MNIST, LeNet-5          99.02%           95.30%               97.30%               212              714
CIFAR-10, ResNet-19       83.19%           76.00%               79.30%               784              3264
CIFAR-10 DVS, ResNet-19   68.10%           64.71%               66.00%               1846             8167
  • The polynomial approximation method is significantly faster (e.g., MNIST inference in 28 seconds) but with slightly lower accuracy due to approximation error.
  • The scheme-switching approach achieves accuracy nearly identical to unencrypted (plaintext) SNNs (e.g., only a 0.8% drop on MNIST), but increases inference time by a factor of roughly 3.4–4.4 (see the table above).

Memory use is minimized by exploiting CKKS slot SIMD packing and rotation key reuse, providing efficiency for large batch inference or deployment on constrained hardware.

5. Comparison with Previous Privacy-Preserving SNN Schemes

PRIVSPIKE advances the state of the art in encrypted SNN inference:

  • Latency and Scalability: Previous FHE-based SNN approaches (e.g., FHE-DiCNN or the method of Farzad et al.) typically require 40–60 time steps per image with inference times often exceeding 900 seconds per image for MNIST/Fashion-MNIST, or over 250 hours for AlexNet-level SNNs. PRIVSPIKE achieves inference in 28 seconds (MNIST, LeNet-5, 2 time steps).
  • Accuracy retention: Prior work often incurs >2–5% drop in accuracy; PRIVSPIKE can deliver <1% drop, especially when using the scheme-switching method.
  • Efficiency and practical deployability: The use of SIMD slot packing, low memory overhead for key management, and compatibility with consumer-grade CPUs positions PRIVSPIKE as a practical solution for real-world deployment.

The results indicate a clear trade-off space: the polynomial approximation yields the lowest latency, while scheme-switching provides near-plaintext accuracy with higher resource requirements.

6. Technical Details of Implementation

PRIVSPIKE’s SNN computation proceeds as follows:

  • LIF update in encrypted domain:

$$V(t+1) = (1 - \Delta t/\tau_m)\, V(t) + I(t)$$

where $V(t)$ and $I(t)$ are encrypted vectors in CKKS slots.
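
A plaintext numpy sketch of this update over a few time steps is given below; $\Delta t$, $\tau_m$, the threshold value, and the reset convention are illustrative assumptions:

```python
# LIF dynamics as PRIVSPIKE evaluates them slot-wise under CKKS, sketched in
# plaintext numpy. dt, tau_m, the threshold, and the reset are assumptions.
import numpy as np

dt, tau_m, threshold = 1.0, 2.0, 1.0
leak = 1.0 - dt / tau_m            # the (1 - dt/tau_m) factor from the update rule

V = np.zeros(4)                    # membrane potentials, one per "slot"
I = np.array([0.9, 0.2, 1.5, 0.0]) # input currents

for _ in range(2):                 # e.g., 2 time steps, as in the MNIST setup
    V = leak * V + I               # encrypted domain: ciphertext-plaintext mult/add
    spike = (V > threshold).astype(float)  # replaced by Chebyshev series or TFHE
    V -= threshold * spike         # soft reset (assumption)

print(spike)                       # final-step spikes: [1. 0. 1. 0.]
```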

  • Chebyshev polynomial calculation:

The non-linear thresholding is implemented via recursive slot-wise polynomial multiplications and additions:

$$T_0(x) = 1, \qquad T_1(x) = x, \qquad T_n(x) = 2x\,T_{n-1}(x) - T_{n-2}(x) \quad \text{for } n \geq 2$$

This enables deep networks with high fan-in neurons to be simulated homomorphically.
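
A minimal sketch of that recurrence over plain floats; under encryption, each multiplication and addition below becomes a slot-wise ciphertext operation:

```python
# Evaluate f(x) = sum_n c_n T_n(x) via the three-term recurrence; under CKKS
# each line maps to slot-wise ciphertext additions and multiplications.
def cheb_eval(x, coeffs):
    t_prev, t_curr = 1.0, x               # T_0 and T_1
    result = coeffs[0] * t_prev
    if len(coeffs) > 1:
        result += coeffs[1] * t_curr
    for c in coeffs[2:]:
        # T_n from T_{n-1} and T_{n-2}
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
        result += c * t_curr
    return result

print(cheb_eval(0.5, [0.0, 0.0, 1.0]))   # T_2(0.5) = 2*0.25 - 1 = -0.5
```

Note that this naive recurrence consumes multiplicative depth roughly linear in the polynomial order, one reason the degree directly drives latency; HE libraries commonly reduce this with baby-step giant-step style evaluation.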

  • Scheme switching pseudocode:

The procedure for moving CKKS ciphertexts to TFHE (and back) for the firing event is specified in the paper; calls such as CKKSSwitchToTFHEW are employed to enable a precise encrypted comparison (a structural sketch appears in Section 3).

Optimization techniques, including precomputed rotation key management and efficient accumulation for convolution operations, are central to PRIVSPIKE's system-level efficiency.
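
As one concrete instance, a slot-wise sum (used when accumulating convolution partial products) needs only logarithmically many rotations. In the plaintext sketch below, np.roll stands in for an encrypted slot rotation performed under a precomputed rotation key:

```python
# Rotate-and-accumulate: sum all CKKS slots in log2(n) rotations. np.roll is
# a plaintext stand-in for an encrypted slot rotation under a rotation key.
import numpy as np

def rotate_and_sum(slots):
    acc = slots.copy()
    shift = 1
    while shift < len(slots):              # assumes a power-of-two slot count
        acc = acc + np.roll(acc, -shift)   # one rotation + one addition
        shift *= 2
    return acc                             # every slot now holds the total

print(rotate_and_sum(np.array([1.0, 2.0, 3.0, 4.0])))   # [10. 10. 10. 10.]
```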

7. Future Directions

Several extensions and open challenges remain:

  • Encrypted Training: While PRIVSPIKE focuses on inference, extending to secure training (e.g., encrypted backpropagation with high-dimensional ciphertexts) is a natural target for future work.
  • Greater model expressiveness: Scheme-switching can be further optimized or parallelized to support even deeper or more complex SNN topologies or other spike-based architectures.
  • Hardware acceleration: While evaluated on CPUs, leveraging GPU- or ASIC-based homomorphic operations may further close the practical gap for real-time neuromorphic inference.
  • Algorithmic improvement: Exploiting advances in HE parameter selection, polynomial approximation tightness, and alternative mixed-scheme switching to further optimize both latency and precision.

Conclusion

PRIVSPIKE provides a scalable, high-accuracy, privacy-preserving framework for SNN inference on encrypted data. By tightly integrating CKKS-based homomorphic encryption with tailored algorithms for SNN firing dynamics—including polynomial approximation and scheme-switching—it achieves practical inference speeds and small accuracy loss, outperforming prior FHE approaches for deep SNNs. This establishes a viable direction for deploying secure, cryptographically protected neuromorphic inference in privacy-critical applications.

References

  • Njungle et al., 5 Oct 2025.