Sparse Code Formation with Linear Inhibition

Published 13 Mar 2015 in cs.CV (arXiv:1503.04115v1)

Abstract: Sparse code formation in the primary visual cortex (V1) has been an inspiration for many state-of-the-art visual recognition systems. To emulate this behavior, networks are trained under mathematical constraints of sparsity or selectivity. In this paper, we explore another approach that uses lateral interconnections in feature-learning networks. However, instead of adding direct lateral interconnections among neurons, we introduce an inhibitory layer placed directly after the normal encoding layer. This design avoids the computational cost and complexity of lateral networks while preserving the crucial objective of sparse code formation. To demonstrate the idea, we use a sparse autoencoder as the normal encoding layer and apply the inhibitory layer on top of it. Early experiments in visual recognition show relative improvements over the traditional approach on the CIFAR-10 dataset. Moreover, the simple installation and Hebbian training procedure allow the inhibitory layer to be integrated into existing networks, enabling further analysis in the future.
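The abstract describes the architecture but not its exact equations, so the following is a minimal illustrative sketch rather than the paper's implementation: a sigmoid encoding layer (standing in for the sparse autoencoder's encoder) followed by a linear inhibitory layer whose weights grow between co-active units under a Hebbian-style rule, which decorrelates and sparsifies the code. The function names (`encode`, `inhibit`, `hebbian_update`), the non-negativity clipping, and the learning rate are all assumptions made for illustration.

```python
import numpy as np

# Sketch of an encoding layer followed by a linear inhibitory layer.
# The exact update rule and activation are illustrative assumptions; the
# abstract only specifies a linear inhibitory layer trained with a Hebbian rule.

rng = np.random.default_rng(0)

n_in, n_hidden = 64, 100  # e.g., 8x8 image patches, 100 learned features
W_enc = rng.normal(scale=0.1, size=(n_hidden, n_in))  # encoder weights (a pretrained sparse autoencoder in the paper)
W_inh = np.zeros((n_hidden, n_hidden))                # inhibitory weights, initially no inhibition

def encode(x):
    """Normal encoding layer: sigmoid activation, as in a sparse autoencoder."""
    return 1.0 / (1.0 + np.exp(-W_enc @ x))

def inhibit(h):
    """Linear inhibition: each unit is suppressed by a weighted sum of the others."""
    return np.maximum(h - W_inh @ h, 0.0)

def hebbian_update(h, s, lr=1e-3):
    """Hebbian-style update: strengthen inhibition between co-active units,
    pushing the population code toward decorrelated, sparser activity."""
    W_inh[...] += lr * np.outer(s, h)
    np.fill_diagonal(W_inh, 0.0)           # units do not inhibit themselves
    np.clip(W_inh, 0.0, None, out=W_inh)   # keep the weights inhibitory (non-negative)

for _ in range(1000):
    x = rng.normal(size=n_in)  # stand-in for a whitened image patch
    h = encode(x)
    s = inhibit(h)             # sparser code after inhibition
    hebbian_update(h, s)
```

Because the inhibition here is a separate feed-forward layer rather than a set of recurrent lateral connections, applying it costs a single matrix-vector product at inference time, which is the cost and complexity advantage over lateral networks that the abstract highlights.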
