Approximating Activation Functions

Published 17 Jan 2020 in cs.LG, cs.PF, and stat.ML (arXiv:2001.06370v1)

Abstract: ReLU is widely seen as the default choice for activation functions in neural networks. However, there are cases where more complicated functions are required. In particular, recurrent neural networks (such as LSTMs) make extensive use of both hyperbolic tangent and sigmoid functions. These functions are expensive to compute. We used function approximation techniques to develop replacements for these functions and evaluated them empirically on three popular network configurations. We find safe approximations that yield a 10% to 37% improvement in training times on the CPU. These approximations were suitable for all cases we considered and we believe they are appropriate replacements for all networks using these activation functions. We also develop ranged approximations, which only apply in some cases due to restrictions on their input domain. Our ranged approximations yield a performance improvement of 20% to 53% in network training time. Our functions also match or considerably outperform the ad-hoc approximations used in Theano and the implementation of Word2Vec.
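
The abstract does not reproduce the paper's actual approximation formulas, so the sketch below only illustrates the two ideas it describes: a "safe" drop-in replacement valid on the whole real line (shown here with Theano's piecewise-linear hard sigmoid, the ad-hoc approximation the abstract compares against) and a "ranged" approximation that is only valid on a restricted input interval. The ranged_sigmoid helper, its interval, and its polynomial degree are illustrative assumptions, not the functions evaluated in the paper.

    import numpy as np

    def sigmoid(x):
        # Reference logistic sigmoid; exact but requires a call to exp.
        return 1.0 / (1.0 + np.exp(-x))

    def hard_sigmoid(x):
        # Piecewise-linear approximation used by Theano:
        # clip(0.2 * x + 0.5, 0, 1).  Cheap and safe for any real input.
        return np.clip(0.2 * x + 0.5, 0.0, 1.0)

    def ranged_sigmoid(x, lo=-8.0, hi=8.0, deg=5):
        # Hypothetical "ranged" approximation: a low-degree polynomial fit to
        # the sigmoid on [lo, hi].  It is only accurate while inputs stay in
        # that interval, mirroring the input-domain restriction described in
        # the abstract.  In practice the coefficients would be precomputed;
        # they are fitted inline here for brevity.
        xs = np.linspace(lo, hi, 1024)
        coeffs = np.polyfit(xs, sigmoid(xs), deg)
        return np.polyval(coeffs, np.clip(x, lo, hi))

    x = np.linspace(-6.0, 6.0, 5)
    print(sigmoid(x))
    print(hard_sigmoid(x))
    print(ranged_sigmoid(x))

The trade-off the abstract reports follows this pattern: the safe variant can replace the exact function anywhere, while the ranged variant is cheaper still but requires knowing that activations stay within the fitted interval.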

Citations (12)
