Unsupervised Learning with Self-Organizing Spiking Neural Networks (1807.09374v1)

Published 24 Jul 2018 in cs.NE and cs.LG

Abstract: We present a system comprising a hybridization of self-organizing map (SOM) properties with spiking neural networks (SNNs) that retains many of the features of SOMs. Networks are trained in an unsupervised manner to learn a self-organized lattice of filters via excitatory-inhibitory interactions among populations of neurons. We develop and test various inhibition strategies, such as inhibition growing with inter-neuron distance and two distinct levels of inhibition. The quality of the unsupervised learning algorithm is evaluated using examples with known labels. Several biologically inspired classification tools are proposed and compared, including a population-level confidence rating and n-grams using a spike-motif algorithm. Using the optimal choice of parameters, our approach produces improvements over state-of-the-art spiking neural networks.

Citations (64)

Summary

Unsupervised Learning with Self-Organizing Spiking Neural Networks

In "Unsupervised Learning with Self-Organizing Spiking Neural Networks", the authors investigate the combination of Self-Organizing Maps (SOMs) and Spiking Neural Networks (SNNs) to cultivate a hybrid system that leverages self-organization and unsupervised learning capabilities. Their approach experiments with inhibitory mechanisms to not only enhance classification accuracy but to also mimic biologically inspired clustering in neural populations.

Methodological Approach

The paper presents a multi-layer architecture that embeds an SNN with modifications inspired by SOM properties. An input layer is coupled to excitatory and inhibitory layers: input pixels connect to excitatory neurons through modifiable synapses whose strengths evolve via spike-timing-dependent plasticity (STDP), and pixel intensities are encoded as Poisson spike trains whose rates scale with intensity.
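The following is a minimal sketch of the two input-side mechanisms described above, Poisson rate coding of pixel intensities and a pair-based (trace-driven) STDP update. It uses NumPy; the constants, function names, and trace formulation are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_encode(image, time_steps=350, max_rate=63.75, dt=1e-3):
    """Map pixel intensities in [0, 1] to spike trains whose firing rates are
    proportional to intensity (a discrete-time approximation of a Poisson process)."""
    rates = image.ravel() * max_rate            # spikes/second per input pixel
    p_spike = np.clip(rates * dt, 0.0, 1.0)     # spike probability per time step
    return rng.random((time_steps, rates.size)) < p_spike

def stdp_step(w, pre_trace, post_trace, pre_spikes, post_spikes,
              a_plus=1e-2, a_minus=1e-4, trace_decay=0.95, w_max=1.0):
    """One pair-based STDP step on input->excitatory weights w of shape
    (n_input, n_exc): potentiate where a postsynaptic spike follows recent
    presynaptic activity, depress where a presynaptic spike follows recent
    postsynaptic activity."""
    pre_trace = trace_decay * pre_trace + pre_spikes
    post_trace = trace_decay * post_trace + post_spikes
    w += a_plus * np.outer(pre_trace, post_spikes)    # potentiation on post spikes
    w -= a_minus * np.outer(pre_spikes, post_trace)   # depression on pre spikes
    np.clip(w, 0.0, w_max, out=w)
    return w, pre_trace, post_trace
```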

Recognizing the drawbacks of the large, fixed inhibition used in existing SNN frameworks, the authors introduce inhibition that increases with inter-neuron distance, analogous to SOM neighborhood functions. Inhibition is also varied over the course of training, either growing gradually or switching between two discrete levels, to balance the quality of the learned filters against learning speed.
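A sketch of distance-dependent lateral inhibition over a 2D lattice of excitatory neurons, in the spirit of the strategy just described. The constant `c_inh` and the growth schedule are illustrative assumptions rather than parameters reported in the paper.

```python
import numpy as np

def build_inhibition(grid_side, c_inh=17.5, scale=1.0):
    """Build a lateral-inhibition matrix whose strength grows with the Euclidean
    distance between neurons on a grid_side x grid_side lattice, so that nearby
    neurons compete weakly (SOM-like neighborhoods) and distant neurons compete
    strongly."""
    coords = np.array([(i, j) for i in range(grid_side)
                       for j in range(grid_side)], dtype=float)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    inhibition = -scale * c_inh * (dists / dists.max())  # stronger with distance
    np.fill_diagonal(inhibition, 0.0)                     # no self-inhibition
    return inhibition

def inhibition_scale(step, total_steps, min_scale=0.1):
    """Illustrative growing schedule: weak inhibition early in training lets many
    neurons fire and form coarse clusters; full strength later keeps only local
    winners active."""
    return min_scale + (1.0 - min_scale) * min(step / total_steps, 1.0)
```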

Evaluation of Representation and Classification Techniques

The classification methods rely on both population-level activity and spike-sequence structure. Excitatory neurons are first labeled according to the classes they respond to most strongly; new examples are then classified by voting schemes, including an all-neuron vote and a confidence-weighted vote. The paper also highlights n-gram evaluation, which uses the order in which neurons fire to categorize data, underscoring the importance of spike timing in neural computation.
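Below is a sketch of the population-level readout mentioned above: label each excitatory neuron by the class it fires for most, then classify a new example by a (optionally confidence-weighted) vote. Function and variable names are assumptions for illustration; the n-gram variant would instead count short firing-order motifs rather than summed spike counts.

```python
import numpy as np

def assign_labels(spike_counts, labels, n_classes):
    """spike_counts: (n_examples, n_neurons) spikes per labeled example.
    Returns each neuron's preferred class and its mean per-class response.
    Assumes every class appears at least once in the labeling set."""
    rates = np.zeros((n_classes, spike_counts.shape[1]))
    for c in range(n_classes):
        rates[c] = spike_counts[labels == c].mean(axis=0)
    return rates.argmax(axis=0), rates

def classify_by_vote(example_counts, neuron_labels, rates, n_classes, confidence=True):
    """Sum spikes per class; with confidence=True, weight each neuron's vote by
    how selectively it responded to its assigned class during labeling."""
    votes = np.zeros(n_classes)
    for c in range(n_classes):
        members = neuron_labels == c
        if confidence:
            weights = rates[c, members] / (rates[:, members].sum(axis=0) + 1e-9)
        else:
            weights = 1.0
        votes[c] = np.sum(example_counts[members] * weights)
    return votes.argmax()
```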

Numerical Results and Implications

The experimental results demonstrate improvements over conventional SNN baselines, particularly when computational resources or training data are limited. Across the inhibition strategies, the LM-SNN models cluster related inputs during unsupervised learning, aligning neuron activity closely with input categories. Performance gains hold across different network sizes, especially with voting schemes that exploit the finer structure of the spike patterns.

The ability of LM-SNNs to retain accuracy even with reduced connectivity demonstrates their robustness, suggesting applications in settings with incomplete data or unreliable hardware. By avoiding the global, gradient-based learning of back-propagation, these models allow efficient training and sidestep some of the memory bottlenecks of deep learning.

Conclusions and Future Directions

The proposed framework is a notable step toward practical unsupervised learning algorithms for SNNs, and it encourages further exploration of biologically plausible inhibition mechanisms and of the information carried by spike timing. Future work could deepen the understanding of spike ordering or extend biologically inspired inhibition strategies to further refine the self-organization demonstrated here. Self-organizing spiking networks are promising not only for clustering and classification, but also as a foundation for neuromorphic computing and real-time decision making.
