Spiking Neural Network Model
- Spiking Neural Networks (SNNs) are networks where neurons communicate via discrete, time-stamped binary spikes, mirroring biological signaling.
- They use a probabilistic framework with synchronous updates and sigmoid-based firing probabilities to formalize and predict network behavior.
- SNNs enable modular designs through composition and hiding operators, supporting applications from Boolean circuits to attention mechanisms.
Spiking Neural Networks (SNNs) are a class of neural network models in which information is represented and processed by discrete, time-stamped spike events, closely mirroring the event-driven, binary, and temporal nature of biological neural signaling. In contrast to classical artificial neural networks (ANNs) characterized by analog, continuous-valued activations, SNNs model networks as graphs of spiking neurons whose states evolve in discrete time, with neuron outputs reflecting binary (firing/not firing) values and network behavior described statistically through the sequences of collective firings ("traces") over input and output neurons.
1. Mathematical and Probabilistic Foundations
The formal SNN model is defined on a directed graph of neurons, partitioned into input, output, and internal units. Each neuron, at any discrete time step, is in one of two states—active (firing, 1) or inactive (not firing, 0). The global network state at time $t$ is a vector of Boolean values over all neurons; a run of the network consists of a sequence of such states, or an "execution." Notably, the model assumes synchronous updating: all neurons compute their next state on each clock tick.
The core of the SNN's stochastic behavior lies in its transition mechanism. For each non-input neuron $u$, the membrane potential at time $t$, $\mathrm{pot}(u,t)$, is a weighted sum of the active (firing) presynaptic neurons minus a bias:

$$\mathrm{pot}(u,t) \;=\; \sum_{v:\,(v,u)\in E} w(v,u)\, C^t(v) \;-\; b(u),$$

where $C^t(v) \in \{0,1\}$ indicates whether neuron $v$ fires at time $t$. The probability that neuron $u$ fires in the next time step is determined via a sigmoid function:

$$\Pr\bigl[C^{t+1}(u) = 1 \mid C^t\bigr] \;=\; \frac{1}{1 + e^{-\mathrm{pot}(u,t)/\lambda}},$$

where $\lambda > 0$ is a temperature parameter. At each step, spikes are generated independently for each neuron, conditioned on the history.
Given an infinite input execution (an infinite sequence of firing patterns on the input neurons), the SNN induces a unique probability distribution over full executions (and, by projection, over the observable traces on input and output neurons), with finite execution probabilities defined recursively via one-step conditionals (trace probabilities follow by projection):

$$\Pr\bigl[C^0 C^1 \cdots C^t\bigr] \;=\; \Pr\bigl[C^0\bigr]\,\prod_{i=1}^{t}\Pr\bigl[C^i \mid C^{i-1}\bigr], \qquad \Pr\bigl[C^i \mid C^{i-1}\bigr] \;=\; \prod_{u \notin \mathrm{In}} \Pr\bigl[C^i(u) \mid C^{i-1}\bigr].$$
These probabilistic semantics, operating over sequences of binary vectors, formalize how stochasticity and temporal structure combine in SNN models (Lynch et al., 2018).
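To make the dynamics above concrete, the following is a minimal Python sketch of one synchronous stochastic step, assuming a dense weight matrix `W` (with `W[v, u]` the weight of the synapse from `v` to `u`), a bias vector `b`, and a temperature `lambda_`; all names and the vectorized representation are illustrative choices, not notation from Lynch et al. (2018).

```python
import numpy as np

def snn_step(state, W, b, lambda_, input_mask, next_input, rng):
    """Sample the next global firing state from the current one.

    state      : 0/1 vector over all neurons at time t
    W          : weight matrix, W[v, u] = weight of edge v -> u (0 if absent)
    b          : bias vector over all neurons
    lambda_    : temperature parameter of the sigmoid
    input_mask : boolean vector marking the input neurons
    next_input : 0/1 vector giving the externally supplied input pattern at t+1
    rng        : numpy random generator
    """
    # Membrane potential: weighted sum of currently firing presynaptic neurons minus bias.
    potential = state @ W - b
    # Sigmoid firing probability with temperature lambda_.
    p_fire = 1.0 / (1.0 + np.exp(-potential / lambda_))
    # Each neuron's spike is sampled independently given the current state.
    next_state = (rng.random(len(state)) < p_fire).astype(int)
    # Input neurons are not governed by the sigmoid rule; they are clamped
    # to the externally supplied pattern.
    next_state[input_mask] = next_input[input_mask]
    return next_state
```

Iterating `snn_step` along an infinite input sequence generates executions according to the distribution described above.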
2. Formal SNN Network Structure
An SNN is defined by:
- A set $N$ of neurons, partitioned into disjoint sets of input, output, and internal neurons.
- A set $E$ of directed, weighted edges ("synapses"), each edge $(u,v)$ carrying a nonzero weight $w(u,v)$.
- For each neuron $u$, an associated bias $b(u)$.
The distinction among input, output, and internal neurons is operationally significant: input neurons are externally driven; output neurons' firing dynamics are externally observable; internal neurons both sense and influence the rest of the network but are not externally visible.
This structure permits clear specification and analysis of network interfaces, modular composition, and restriction or "hiding" of traces in larger network assemblies.
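As a point of reference for the structural definitions above, here is a minimal sketch of the static network description in Python; the field names and the dictionary-based edge representation are illustrative assumptions, not the paper's notation.

```python
from dataclasses import dataclass, field

@dataclass
class SNN:
    inputs: frozenset     # externally driven neurons
    outputs: frozenset    # externally observable neurons
    internals: frozenset  # neurons visible only inside the network
    weights: dict = field(default_factory=dict)  # (v, u) -> nonzero synapse weight
    biases: dict = field(default_factory=dict)   # u -> bias

    @property
    def neurons(self):
        # The three roles must partition the full neuron set.
        assert self.inputs.isdisjoint(self.outputs)
        assert (self.inputs | self.outputs).isdisjoint(self.internals)
        return self.inputs | self.outputs | self.internals
```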
3. External Behavior and Trace Distributions
The externally observable behavior is formalized through "traces," i.e., sequences of binary firing patterns on the input and output neurons. The network, for a given infinite input sequence, induces a probability distribution on all finite and infinite observable traces, obtained by marginalizing the execution distribution over the internal neurons:

$$\Pr\bigl[\beta\bigr] \;=\; \sum_{\alpha\,:\,\alpha|_{\mathrm{In}\cup\mathrm{Out}}=\beta} \Pr\bigl[\alpha\bigr].$$

An alternative representation specifies conditional probabilities at each step; because the internal state is hidden, these conditionals may depend on the entire visible prefix:

$$\Pr\bigl[\beta^0 \beta^1 \cdots \beta^t\bigr] \;=\; \Pr\bigl[\beta^0\bigr]\,\prod_{i=1}^{t}\Pr\bigl[\beta^i \mid \beta^0 \cdots \beta^{i-1}\bigr].$$
The equivalence of these definitions ensures that the SNN's behavior can be exhaustively described through either full-trace or stepwise conditional distributions. This formalism enables precise, probabilistic reasoning about what is "visible" to an external observer of the network.
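The equivalence of the full-trace and stepwise views can be made concrete with a short sketch that scores a finite execution as a product of one-step Bernoulli likelihoods; it works on full executions (all neurons visible), with observable-trace probabilities then obtained by summing over internal-neuron assignments. The vectorized representation mirrors the `snn_step` sketch above and is an illustrative assumption.

```python
import numpy as np

def execution_probability(states, W, b, lambda_, input_mask):
    """Probability of the finite execution states[0], ..., states[T],
    given that the input neurons' firings are externally fixed."""
    prob = 1.0
    for t in range(len(states) - 1):
        potential = states[t] @ W - b
        p_fire = 1.0 / (1.0 + np.exp(-potential / lambda_))
        fired = states[t + 1].astype(bool)
        # Bernoulli likelihood of each neuron's decision at step t+1.
        per_neuron = np.where(fired, p_fire, 1.0 - p_fire)
        # Only non-input neurons are stochastic; inputs contribute probability 1.
        prob *= np.prod(per_neuron[~input_mask])
    return prob
```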
4. Compositional Operators: Composition and Hiding
Two fundamental network operators enable modular construction and abstraction:
A. Composition ($N_1 \times N_2$):
Two SNNs $N_1$ and $N_2$ are "compatible" if certain conditions hold (e.g., no neuron serves conflicting roles; internal neuron labels are unique to each component). Their composition, $N_1 \times N_2$, merges the neuron sets (with appropriate role inheritance) and unifies the directed edge structures.
- For acyclic compositions where outputs from $N_1$ feed into inputs of $N_2$ (but not vice versa), the joint external behavior factors:
  $$\Pr_{N_1 \times N_2}\bigl[\beta\bigr] \;=\; \Pr_{N_1}\bigl[\beta|_{N_1}\bigr]\cdot\Pr_{N_2}\bigl[\beta|_{N_2}\bigr],$$
  where $\beta|_{N_i}$ denotes the projection of $\beta$ onto the external neurons of $N_i$.
- In general (possibly cyclic), the stepwise conditionals factor over time:
  $$\Pr_{N_1 \times N_2}\bigl[\beta^{t+1}\mid\beta^0\cdots\beta^t\bigr] \;=\; \Pr_{N_1}\bigl[\beta^{t+1}|_{N_1}\mid(\beta^0\cdots\beta^t)|_{N_1}\bigr]\cdot\Pr_{N_2}\bigl[\beta^{t+1}|_{N_2}\mid(\beta^0\cdots\beta^t)|_{N_2}\bigr].$$
This compositionality guarantees that the probabilistic trace distribution of a compound network is uniquely determined by the external behavior of its components.
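A minimal structural sketch of composition, reusing the `SNN` dataclass sketched in Section 2: neuron sets, edges, and biases are unioned, and an input of one component that is an output of the other is driven internally after composition. The compatibility checks and role-inheritance rules are simplified illustrations, not the paper's exact conditions.

```python
def compose(n1: "SNN", n2: "SNN") -> "SNN":
    # Simplified compatibility: internal neurons are private to their component,
    # and no neuron is an output of both networks.
    assert n1.internals.isdisjoint(n2.neurons) and n2.internals.isdisjoint(n1.neurons)
    assert n1.outputs.isdisjoint(n2.outputs)

    outputs = n1.outputs | n2.outputs              # outputs remain externally visible
    inputs = (n1.inputs | n2.inputs) - outputs     # matched inputs are now driven internally
    internals = n1.internals | n2.internals
    return SNN(frozenset(inputs), frozenset(outputs), frozenset(internals),
               {**n1.weights, **n2.weights}, {**n1.biases, **n2.biases})
```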
B. Hiding Operator:
For a subset $Z$ of the output neurons of an SNN $N$, hiding reclassifies these outputs as internal: $\mathrm{hide}(N, Z)$. Observed traces exclude these outputs. The external trace probability is computed by projection:

$$\Pr_{\mathrm{hide}(N,Z)}\bigl[\beta\bigr] \;=\; \sum_{\beta' \in B(\beta)} \Pr_{N}\bigl[\beta'\bigr],$$

where $B(\beta)$ is the set of traces in the original network projecting to $\beta$ over the visible neurons.
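On the trace level, hiding amounts to projecting traces onto the remaining visible neurons and summing probabilities, as in the following sketch; the dictionary representation of a finite trace distribution is an illustrative assumption.

```python
from collections import defaultdict

def hide_trace_distribution(trace_probs, visible_neurons):
    """trace_probs: {trace: probability}, where a trace is a tuple of
    frozensets of firing neurons (one set per time step).
    visible_neurons: the neurons that remain observable after hiding."""
    hidden = defaultdict(float)
    for trace, p in trace_probs.items():
        # Project each step onto the visible neurons, then accumulate.
        projected = tuple(step & visible_neurons for step in trace)
        hidden[projected] += p
    return dict(hidden)
```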
These operators are essential for scalable, hierarchical modeling and for managing interfaces in complex SNN systems (Lynch et al., 2018).
5. Canonical Examples
The model is illustrated through several constructions:
- Boolean Circuits: SNNs implementing logic gates (AND, OR, NOT); specific weights and biases are chosen so that, with high probability, outputs realize the truth table of each gate (a concrete weight/bias choice for an AND gate is sketched after this list). These gates are then composed into larger circuits.
- Attention Networks: Built from a Winner-Take-All (WTA) sub-network and a Filter sub-network, assembled using the composition operator. The WTA ensures exactly one output neuron is active (with vanishing error probability), and the Filter selects outputs contingent on the WTA's decision, yielding an attention mechanism analyzable through external trace distributions.
- Cyclic (Mutual) Composition: Networks where two SNN modules mutually influence each other, forming cycles. The synchronous update semantics ensure the system can be analyzed inductively on time, and joint output probabilities are bounded using stepwise independence lemmas.
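The Boolean-gate construction can be illustrated with a tiny numeric example. The specific constants below (weight 10, bias 15, temperature 1) are illustrative choices, not values from the paper: with both inputs firing the potential is +5, with at most one it is at most -5, so the sigmoid pushes the firing probability close to 1 or 0, and scaling the weights up drives the error probability toward zero.

```python
import numpy as np

def and_gate_firing_probability(x1, x2, w=10.0, bias=15.0, lam=1.0):
    # Potential of the AND neuron given the two binary inputs.
    potential = w * x1 + w * x2 - bias
    return 1.0 / (1.0 + np.exp(-potential / lam))

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, round(and_gate_firing_probability(x1, x2), 4))
# ~0.0, ~0.0067, ~0.0067, ~0.9933: the AND truth table holds with high probability.
```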
These examples demonstrate the capacity of the framework for precise modular analysis of both classical computation (Boolean logic) and neural computational motifs (attention) (Lynch et al., 2018).
6. Formalizing and Solving Problems with SNNs
A "problem" for an SNN is formally defined as a mapping from infinite input traces to sets of "possibility" functions, each such function assigning a probability distribution to externally visible finite traces, subject to normalization and consistency constraints.
An SNN is said to "solve" a problem if, for every infinite input trace, the externally observed trace distribution induced by belongs to the corresponding set in .
Critically, the compositional operators preserve problem solving:
- If $N_1$ and $N_2$ solve problems $R_1$ and $R_2$, then $N_1 \times N_2$ solves $R_1 \times R_2$, where composition of problems is defined via the product of the respective trace conditional probabilities.
- If $N$ solves $R$, then $\mathrm{hide}(N, Z)$ solves the "hidden" version of $R$, where the trace probabilities are sums over all traces projecting to each observable trace.
Problems like winner-take-all selection, filtering, and composite attention can then be described and verified in this precise probabilistic language, allowing for compositional, scalable SNN design and analysis.
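As a rough illustration of problem composition, the sketch below composes two stepwise trace conditionals by multiplication, mirroring the factorization used for composed networks; it assumes, purely for illustration, that each problem is represented by a single conditional-probability function (a single possibility function) given as a hypothetical callable.

```python
def compose_conditionals(cond1, cond2):
    """cond1, cond2: hypothetical callables (trace_prefix, next_step) -> probability,
    each scoring only the firing decisions of its own component's neurons."""
    def composed(trace_prefix, next_step):
        # The composed problem's one-step conditional is the product of the
        # components' conditionals, matching the stepwise factorization above.
        return cond1(trace_prefix, next_step) * cond2(trace_prefix, next_step)
    return composed
```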
7. Significance and Theoretical Implications
This rigorous compositional SNN framework enables:
- Exact probabilistic reasoning about SNN behavior, including uncertainty propagation and error bounds.
- Scalable construction and verification of complex networks from primitives, using composition and hiding to control interaction scopes and abstraction boundaries.
- Modular specification and analysis of neural computational tasks in probabilistic terms, supporting formal verification and modular synthesis.
The separation of observed and internal state, along with compositional operators grounded in concurrency theory, creates a foundation for "formal neural engineering" with explicit trace-driven semantics (Lynch et al., 2018).
By combining stochastic, synchronous Boolean neuron dynamics with well-founded interfaces for composition and abstraction, and providing formal definitions of network "problems" that can be solved, this model establishes a foundation for rigorous analysis, reasoning, and design of SNN systems—including both classical computational examples and tasks representative of higher-order neural computation.