CogniSNN: Cognition-Inspired Spiking Neural Network

Updated 28 January 2026
  • CogniSNN is a cognition-inspired spiking neural network paradigm that employs random graph architectures to mimic the adaptive, heterogeneous connectivity of biological neural circuits.
  • It features innovations such as binary OR-skip residual connections and tailored pooling mechanisms that enhance gradient flow and depth scalability.
  • Dynamic Growth Learning and pathway plasticity enable continual and transfer learning, reducing catastrophic forgetting and facilitating energy-efficient neuromorphic deployment.

CogniSNN is a cognition-inspired spiking neural network (SNN) paradigm that leverages random graph architectures (RGA) to achieve enhanced depth-scalability, pathway plasticity, and dynamic configurability, departing from the traditional chain-like hierarchical backbones of artificial neural networks and most SNNs. Grounded in neurobiological principles, CogniSNN systematically models neuron-expandability, pathway-reusability, and synaptic structural plasticity, enabling high performance, robustness, and hardware readiness for neuromorphic deployment (Huang et al., 9 May 2025, Huang et al., 12 Dec 2025).

1. Biological Motivation and Principle

In contrast to the rigid, layered configurations prevalent in conventional SNNs and ANNs, biological neural circuits exhibit stochastic, non-uniform connectivity and form complex, dynamic networks. CogniSNN directly incorporates this architectural diversity, drawing on three key phenomena observed in neural systems:

  • Neuron-Expandability: Large, deeply interconnected circuits support arbitrarily deep, adaptive signal pathways.
  • Pathway-Reusability: Critical sub-circuits are selectively reused during learning and transfer, conferring continual learning capabilities.
  • Dynamic-Configurability: Structural plasticity driven by activity leads to growth or pruning of pathways, creating a dynamic substrate for computation (Huang et al., 12 Dec 2025).

This approach embeds the random, sparse, and adaptive connectivity of the cortex into the computational substrate of SNNs, enabling new forms of learning and inference that better reflect biological plausibility and robustness.

2. Random Graph Architecture (RGA) Construction

CogniSNN defines its core computational structure as a directed acyclic graph G = (V, E) over N nodes ("ResNodes"). Two complementary RGA instantiations are supported:

  • Erdős–Rényi (ER) Model: Each possible directed edge (i → j) is included independently with probability p. The adjacency matrix is sampled as A_{ij} = Bernoulli(p) · w_{ij}, where the w_{ij} are trainable synaptic weights (Huang et al., 9 May 2025, Huang et al., 12 Dec 2025).
  • Watts–Strogatz (WS) Small-World Model: Begins with a k-regular lattice and rewires each outgoing edge with probability β, yielding networks with high clustering and short path lengths.
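As a concrete illustration, an ER-style DAG can be sampled by drawing each forward edge i → j (with i < j) independently, so that the node ordering guarantees acyclicity. This is a minimal sketch under that ordering convention, not the authors' implementation; the function name and seed handling are assumptions.

```python
import random

def sample_er_dag(n, p, seed=0):
    """Sample an Erdos-Renyi-style DAG over n ResNodes.

    Only forward edges i -> j with i < j are considered, so the
    resulting graph is acyclic by construction.
    """
    rng = random.Random(seed)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:  # include edge with probability p
                adj[i].append(j)
    return adj
```

In the full model, each retained edge would additionally carry a trainable synaptic weight w_{ij}.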

Signal propagation in the RGA requires handling arbitrary predecessor sets and, crucially, resolution of feature-map dimensionality mismatches. This is achieved through a two-stage pooling process at each node:

  • Standard Pooling (SP): Applies average pooling when incoming feature dimensions meet a threshold.
  • Tailored or Adaptive Pooling (TP/AP): Aligns all incoming features to the minimum spatial dimension among predecessors, ensuring compatibility for further processing (Huang et al., 9 May 2025, Huang et al., 12 Dec 2025).
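The alignment step can be sketched as follows, assuming square feature maps whose side lengths are integer multiples of the smallest incoming map; the helper names are illustrative, not from the papers.

```python
def avg_pool2d(x, k):
    # non-overlapping k x k average pooling on a 2D list
    h, w = len(x), len(x[0])
    return [[sum(x[i * k + di][j * k + dj]
                 for di in range(k) for dj in range(k)) / (k * k)
             for j in range(w // k)]
            for i in range(h // k)]

def align_to_min(features):
    # tailored pooling: downsample every incoming map to the smallest spatial size
    target = min(len(f) for f in features)
    return [avg_pool2d(f, len(f) // target) if len(f) > target else f
            for f in features]
```

After alignment, all predecessor outputs share one spatial resolution and can be combined at the receiving ResNode.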

3. Spiking Residual Node and Binary OR-Skip Mechanism

Each ResNode comprises two serial Conv–BatchNorm–SpikingNeuron (ConvBNSN) blocks. The output is determined by a purely spike-domain OR-gate residual connection:

O_i[t] = OR(O_i^1[t], O_i^2[t]) = O_i^1[t] + O_i^2[t] − O_i^1[t] ⊙ O_i^2[t]

where O_i^1 and O_i^2 are the outputs of the first and second ConvBNSN stages, respectively, and ⊙ denotes element-wise multiplication. This design has several computational and neurobiological advantages:

  • Identity Mapping and Gradient Flow: If O_i^2 ≡ 0, the OR connection implements a strict identity mapping, ensuring no vanishing/exploding gradients and enabling arbitrarily deep paths without degradation (Huang et al., 9 May 2025).
  • All-binary Spiking Domain: Maintains binary spikes throughout, preventing signal explosion associated with additive skips and conserving event-driven semantics.
  • Robust Depth-Scalability: Demonstrated graceful accuracy degradation even with 40 ResNodes in series, with only a minor drop (e.g., 92.2% to 88.3% on DVS-Gesture) (Huang et al., 9 May 2025).
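On binary spikes the OR combination reduces to the elementwise expression a + b − a·b, which a short sketch makes explicit (illustrative only, operating on flat Python lists rather than spike tensors):

```python
def or_skip(o1, o2):
    """Spike-domain OR residual: OR(a, b) = a + b - a*b on binary spikes.

    The output stays binary, and the connection reduces to the identity
    map when the second-stage output o2 is all zeros.
    """
    return [a + b - a * b for a, b in zip(o1, o2)]
```

For example, `or_skip([1, 0, 1, 0], [0, 1, 1, 0])` yields `[1, 1, 1, 0]`, and pairing any spike train with all zeros returns it unchanged, which is the identity-mapping property noted above.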

Adaptive pooling within each node ensures valid spike-encoded feature combination even under random connectivity (Huang et al., 12 Dec 2025).

4. Pathway Plasticity and Knowledge Transfer

CogniSNN introduces an explicit path-level plasticity mechanism, enabling continual and transfer learning in a biologically plausible manner. The central technical innovation is the use of betweenness centrality to identify "key" pathways:

  • Pathway Definition: A path p = {v_1, e_1, v_2, …, e_L, v_{L+1}} is an alternating node/edge sequence through G.
  • Betweenness Centrality (BC):

BC(p) = ∑_{v ∈ p} BC(v) + ∑_{e ∈ p} BC(e)

  • Key Pathway Selection: For transfer to a similar task, the top-K (high-BC) paths are chosen for fine-tuning; for dissimilar tasks, the bottom-K (low-BC) paths are selected to promote new pathway formation. K = 1 is standard (Huang et al., 9 May 2025, Huang et al., 12 Dec 2025).
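A brute-force sketch of node betweenness centrality and path scoring on a tiny DAG follows; practical implementations would use an efficient algorithm such as Brandes', and the edge-BC term of the formula is omitted here for brevity. All function names are illustrative.

```python
from itertools import combinations

def all_paths(adj, s, t, prefix=None):
    # enumerate all simple s -> t paths in a DAG (adjacency-dict form)
    prefix = (prefix or []) + [s]
    if s == t:
        return [prefix]
    return [p for nxt in adj.get(s, []) for p in all_paths(adj, nxt, t, prefix)]

def node_betweenness(adj, nodes):
    # BC(v): summed fraction of shortest s-t paths passing through v
    bc = {v: 0.0 for v in nodes}
    for s, t in combinations(nodes, 2):
        paths = all_paths(adj, s, t)
        if not paths:
            continue
        shortest = min(len(p) for p in paths)
        sp = [p for p in paths if len(p) == shortest]
        for v in nodes:
            if v not in (s, t):
                bc[v] += sum(v in p for p in sp) / len(sp)
    return bc

def path_score(path, bc):
    # pathway BC as the sum of its nodes' centralities (node term only)
    return sum(bc[v] for v in path)
```

On the diamond DAG `{0: [1, 2], 1: [3], 2: [3]}`, both intermediate nodes receive BC 0.5; ranking candidate paths by `path_score` then gives the top-K or bottom-K selection described above.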

During transfer or continual learning (Learning without Forgetting, LwF), all weights except those on the selected key pathway and classifier are frozen:

  1. Update only pathway-specific and classifier parameters using surrogate gradients.
  2. Optimize a composite loss: old-task distillation, new-task cross-entropy, and L2 regularization.
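A sketch of such a composite loss on raw logit lists is given below; the temperature, weighting coefficients, and function names are assumptions for illustration, not values taken from the papers, and the distillation term is the standard softened cross-entropy against the old model's outputs.

```python
import math

def softmax(z, temp=1.0):
    # numerically stable softmax with optional temperature
    m = max(z)
    e = [math.exp((x - m) / temp) for x in z]
    s = sum(e)
    return [x / s for x in e]

def lwf_loss(new_logits, teacher_logits, label, params,
             temp=2.0, alpha=1.0, beta=1.0, lam=1e-4):
    """Composite LwF objective: old-task distillation + new-task
    cross-entropy + L2 regularization on the trainable parameters."""
    p_teacher = softmax(teacher_logits, temp)
    p_student = softmax(new_logits, temp)
    distill = -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))
    ce = -math.log(softmax(new_logits)[label])
    l2 = sum(w * w for w in params)
    return alpha * distill + beta * ce + lam * l2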

This framework achieves ~10% less catastrophic forgetting and slightly higher new-task accuracy compared to vanilla LwF (Huang et al., 9 May 2025, Huang et al., 12 Dec 2025).

5. Dynamic Growth Learning Mechanism

To emulate synaptic structural plasticity and support temporal robustness, CogniSNN implements Dynamic Growth Learning (DGL):

  • At each timestep t of T, a progressively growing subgraph, consisting of the top q(t) = ⌊t · |P| / T⌋ key paths, is activated for inference.
  • Early timesteps use only a few critical pathways; as time advances, more of the network is utilized.
  • During training, each timestep t backpropagates error only through the active subgraph P^(t).
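The growth schedule follows directly from the formula, assuming the path ranking (highest-BC first) is given; the function name is illustrative.

```python
def active_subgraph(t, T, ranked_paths):
    """Return the top q(t) = floor(t * |P| / T) key paths active at timestep t.

    t is 1-indexed; at t = T the entire ranked path set is active.
    """
    q = (t * len(ranked_paths)) // T
    return ranked_paths[:q]
```

With four ranked paths and T = 4, timestep 1 activates a single critical path and timestep 4 activates all four, matching the progressive-growth behavior described above.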

This mechanism enables:

  • Timestep-Flexible Inference: CogniSNN can run at variable timesteps, outperforming static models under reduced-latency constraints (>10% accuracy gains when T is cut at test time).
  • Noise Robustness: Superior resistance to input corruption and frame-dropout, with 9–16% absolute advantage under severe noise (Huang et al., 12 Dec 2025).

6. Empirical Evaluation and Comparative Performance

CogniSNN has been benchmarked on multiple neuromorphic and vision datasets:

Dataset | T | CogniSNN (ER-RGA-7) | Baseline (SOTA)
--- | --- | --- | ---
DVS-Gesture | 16 | 98.61 ± 0.11% | Spikformer, 98.3%
CIFAR10-DVS | 5 | 79.8 ± 0.1% | SSNN, 73.63%
N-Caltech101 | 5 | 80.64 ± 0.15% | EventMix, 79.5%
Tiny-ImageNet | 4 | 55.41 ± 0.17% | Joint-SNN, 55.39%

Key effects:

  • RGA confers 2–4% gain over chain-like architectures.
  • OR residual outperforms ADD/AND/no-skip, with up to 9% improvement over no-skip and ~0.4% over ADD.
  • On continual learning benchmarks (e.g., CIFAR100→CIFAR10, CIFAR100→MNIST), CogniSNN reduces forgetting by 5–10% versus chain SNNs under LwF.
  • DGL confers up to 17.4% higher accuracy when inference is constrained to a single timestep on DVS-Gesture, indicating a high degree of runtime flexibility (Huang et al., 9 May 2025, Huang et al., 12 Dec 2025).

7. Suitability for Neuromorphic Hardware

CogniSNN’s architecture is tailored for energy-efficient neuromorphic deployment:

  • OR-Gate Residuals: Implemented with simple binary logic, requiring no real-valued accumulators and minimal MACs, resulting in 10–15% lower energy consumption relative to additive skip designs.
  • Sparse Matrix Storage: RGA naturally maps to sparse representations (e.g., CSR) on hardware.
  • Dynamic Structure: Supports masking and progressive subgraph activation per timestep (matching DGL), which aligns with event-driven processing requirements.
  • On-Chip Learning: KP-LwF fine-tuning requires updates to only a small subnetwork, significantly reducing the number of memory writes and power usage. Path selection indices can be stored as compact lookup tables.

This suggests that CogniSNN may provide a unifying model for both high-performing brain-inspired computation and real-time adaptive hardware (Huang et al., 12 Dec 2025).


References: (Huang et al., 9 May 2025, Huang et al., 12 Dec 2025)
