
How and what to learn: The modes of machine learning (2202.13829v2)

Published 28 Feb 2022 in cs.LG, cond-mat.dis-nn, physics.data-an, and stat.ML

Abstract: Despite their great success, neural networks still remain black boxes due to their lack of interpretability. Here we propose a new analysis method, weight pathway analysis (WPA), to make them transparent. We treat weight pathways, which link neurons longitudinally from input neurons to output neurons, as the basic units for understanding a neural network, and decompose a neural network into a series of subnetworks of such weight pathways. A visualization scheme for the subnetworks is presented that gives longitudinal perspectives of the network, like radiographs, making the internal structure of the network visible. The impact of parameter adjustments or structural changes to the network can be visualized via such radiographs. Characteristic maps are established for subnetworks to characterize the enhancement or suppression of the influence of input samples on each output neuron. Using WPA, we discover that neural networks store and utilize information in a holographic way, that is, subnetworks encode all training samples in a coherent structure, and thus only by investigating the weight pathways can one explore the samples stored in the network. Furthermore, with WPA we reveal fundamental learning modes of a neural network: the linear learning mode and the nonlinear learning mode. The former extracts linearly separable features while the latter extracts linearly inseparable features. The hidden-layer neurons self-organize into different classes to establish learning modes and to reach the training goal. The finding of learning modes provides a theoretical ground for understanding some of the fundamental problems of machine learning, such as the dynamics of the learning process, the role of linear and nonlinear neurons, and the role of network width and depth.
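
The weight-pathway decomposition described in the abstract can be illustrated with a small fully connected network. Below is a minimal sketch in Python, assuming a two-layer MLP and taking the value of a pathway to be the product of the weights along it; the layer shapes, the einsum-based enumeration, and the summed "characteristic map" are illustrative assumptions, not the paper's exact construction.

    # Hedged sketch: enumerating weight pathways in a small two-layer MLP.
    # Shapes and the aggregation rule are illustrative assumptions; the
    # paper's exact WPA construction may differ.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 4, 8, 3

    W1 = rng.normal(size=(n_hidden, n_in))   # input -> hidden weights
    W2 = rng.normal(size=(n_out, n_hidden))  # hidden -> output weights

    # A weight pathway links one input neuron i to one output neuron k
    # through one hidden neuron j; its value is the product of the
    # weights along the path: W2[k, j] * W1[j, i].
    pathways = np.einsum("kj,ji->kji", W2, W1)  # shape (n_out, n_hidden, n_in)

    # Subnetwork for output neuron k: all pathways terminating at k.
    subnetwork_0 = pathways[0]                  # shape (n_hidden, n_in)

    # A simple "characteristic map" (assumed form): total pathway weight
    # from each input neuron to each output neuron, summed over hidden
    # neurons.
    characteristic_map = pathways.sum(axis=1)   # shape (n_out, n_in)
    print(characteristic_map)

Summing pathway values over hidden neurons collapses each subnetwork into an input-to-output map, one simple way to see how the influence of each input neuron is enhanced or suppressed at each output neuron.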

Citations (1)
