
Learning Sparse, Distributed Representations using the Hebbian Principle (1611.04228v1)

Published 14 Nov 2016 in cs.LG

Abstract: The "fire together, wire together" Hebbian model is a central principle for learning in neuroscience, but surprisingly, it has found limited applicability in modern machine learning. In this paper, we take a first step towards bridging this gap, by developing flavors of competitive Hebbian learning which produce sparse, distributed neural codes using online adaptation with minimal tuning. We propose an unsupervised algorithm, termed Adaptive Hebbian Learning (AHL). We illustrate the distributed nature of the learned representations via output entropy computations for synthetic data, and demonstrate superior performance, compared to standard alternatives such as autoencoders, in training a deep convolutional net on standard image datasets.

Citations (9)