Data-Driven Randomized Learning of Feedforward Neural Networks

Published 11 Aug 2019 in cs.LG, cs.NE, and stat.ML (arXiv:1908.03891v1)

Abstract: Randomized methods of neural network learning suffer from a problem with the generation of random parameters, which are difficult to set optimally so that they yield a good projection space. The standard method draws the parameters from a fixed interval that is independent of the data scope and the activation function type. This does not lead to good results when approximating strongly nonlinear functions. In this work, a method is proposed that adjusts the random parameters, representing the slopes and positions of the sigmoids, to the features of the target function. The method randomly selects regions of the input space, places the sigmoids in these regions, and then adjusts the sigmoid slopes to the local fluctuations of the target function. This yields very good results in the approximation of complex target functions when compared to the standard fixed-interval method and other methods recently proposed in the literature.
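To make the described procedure concrete, below is a minimal Python sketch of the general idea: each hidden sigmoid is centred on a randomly drawn training point (a randomly selected input-space region), its slope is scaled to the local variation of the target around that point, and the output weights are then fit analytically by least squares. The nearest-neighbour slope heuristic, function names, and parameter choices here are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def fit_random_sigmoid_network(X, y, n_hidden=50, rng=None):
    """Sketch of data-driven randomized learning of a feedforward network.

    Hidden sigmoids are centred on randomly chosen training points and their
    slopes are scaled by the local fluctuation of the target; output weights
    are computed by least squares. Slope rule is an illustrative heuristic.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape

    # Randomly select input-space regions by sampling training points as centres.
    centres = X[rng.integers(0, n, size=n_hidden)]

    W = np.empty((n_hidden, d))
    b = np.empty(n_hidden)
    for k, c in enumerate(centres):
        # Estimate the local fluctuation of the target around the centre
        # from its nearest neighbours (assumed heuristic, not the paper's rule).
        dist = np.linalg.norm(X - c, axis=1)
        nbrs = np.argsort(dist)[1:6]
        local_range = np.ptp(y[nbrs]) + 1e-12
        local_width = dist[nbrs].mean() + 1e-12
        slope = local_range / local_width          # steeper sigmoid where y changes fast
        direction = rng.normal(size=d)
        direction /= np.linalg.norm(direction)
        W[k] = slope * direction
        b[k] = -W[k] @ c                           # place the sigmoid inflection at the centre

    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))       # hidden-layer projection
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # analytic output weights
    return W, b, beta

def predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta
```

As a rough usage example, fitting a 1-D nonlinear target could look like `W, b, beta = fit_random_sigmoid_network(X, y, n_hidden=100)` followed by `y_hat = predict(X_test, W, b, beta)`; the contrast with the standard approach would be drawing W and b uniformly from a fixed interval instead of deriving them from the data.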

Citations (9)
