Duality Principle and Biologically Plausible Learning: Connecting the Representer Theorem and Hebbian Learning (2309.16687v1)

Published 2 Aug 2023 in cs.NE, cs.AI, and q-bio.NC

Abstract: A normative approach called Similarity Matching was recently introduced for deriving and understanding the algorithmic basis of neural computation, with a focus on unsupervised problems. It involves deriving algorithms from computational objectives and evaluating their compatibility with anatomical and physiological observations. In particular, it arrives at neural architectures by considering dual alternatives to the primal formulations of popular models such as PCA. However, its connection to the Representer theorem remains unexplored. In this work, we use teachings from this approach to explore supervised learning algorithms and to clarify the notion of Hebbian learning. We examine regularized supervised learning and elucidate the emergence of neural architectures and of additive versus multiplicative update rules. Rather than developing new algorithms, we focus on showing that the Representer theorem offers the perfect lens through which to study biologically plausible learning algorithms. We argue that many past and current advancements in the field rely on some form of dual formulation to introduce biological plausibility; in short, as long as a dual formulation exists, it is possible to derive biologically plausible algorithms. Our work sheds light on the pivotal role of the Representer theorem in advancing our understanding of neural computation.
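
For reference, the Representer theorem the abstract builds on can be stated as follows; the notation below ($k$, $\lambda$, $\alpha_i$) is the standard textbook form, not taken from the paper. Given training data $(x_i, y_i)_{i=1}^{n}$, a positive-definite kernel $k$ with reproducing kernel Hilbert space $\mathcal{H}$, and a regularized objective

$$\min_{f \in \mathcal{H}} \; \sum_{i=1}^{n} L\big(y_i, f(x_i)\big) + \lambda \, \lVert f \rVert_{\mathcal{H}}^{2}, \qquad \lambda > 0,$$

every minimizer admits the expansion

$$f^{*}(x) = \sum_{i=1}^{n} \alpha_i \, k(x, x_i).$$

The solution is therefore parameterized by coefficients $\alpha_i$ attached to the training examples rather than by explicit primal weights; this data-anchored, dual parameterization is the form the abstract connects to neural architectures and Hebbian-style learning rules.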
