
Deep Global-Connected Net With The Generalized Multi-Piecewise ReLU Activation in Deep Learning (1807.03116v1)

Published 19 Jun 2018 in cs.CV and cs.LG

Abstract: Recent progress has shown that exploiting hidden-layer neurons in convolutional neural networks, together with a carefully designed activation function, can yield better classification results in the field of computer vision. The paper first introduces a novel deep learning architecture that aims to mitigate the vanishing-gradient problem, in which earlier hidden-layer neurons are directly connected to the last hidden layer and fed into the final layer for classification. We then design a generalized linear rectifier as the activation function, which can approximate arbitrarily complex functions through training of its parameters. We show that our design achieves comparable performance on a number of object recognition and video action benchmark tasks with significantly fewer parameters and a shallower network architecture, which is not only promising for training in terms of computational burden and memory usage, but is also applicable to low-computation, low-memory mobile scenarios.
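
The sketch below illustrates the two ideas named in the abstract as they are commonly implemented: a trainable piecewise-linear activation built as a sum of shifted ReLU segments, and "global" connections that feed every earlier hidden block directly into the classifier. This is a minimal PyTorch sketch based only on the abstract; the class names, the slope/breakpoint parameterization, and the layer sizes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiPiecewiseReLU(nn.Module):
    """Hypothetical generalized multi-piecewise ReLU:
    y = sum_k a_k * relu(x - b_k), with trainable slopes a_k and
    breakpoints b_k, so the unit can approximate more complex
    1-D functions than a fixed ReLU."""
    def __init__(self, num_pieces: int = 4):
        super().__init__()
        # Slopes and breakpoints are illustrative initializations;
        # the paper's parameterization may differ.
        self.slopes = nn.Parameter(torch.ones(num_pieces))
        self.breaks = nn.Parameter(torch.linspace(-1.0, 1.0, num_pieces))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast over a trailing "piece" axis, then sum the segments.
        x = x.unsqueeze(-1)                                   # (..., 1)
        segments = self.slopes * torch.relu(x - self.breaks)  # (..., K)
        return segments.sum(dim=-1)

class GlobalConnectedNet(nn.Module):
    """Sketch of the global-connection idea: pooled features from
    *every* hidden block are concatenated before the classifier,
    giving early layers a direct gradient path to the loss."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.act = MultiPiecewiseReLU()
        self.block1 = nn.Conv2d(3, 16, 3, padding=1)
        self.block2 = nn.Conv2d(16, 32, 3, padding=1)
        self.block3 = nn.Conv2d(32, 64, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # The classifier sees features from all blocks, not just the last.
        self.fc = nn.Linear(16 + 32 + 64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h1 = self.act(self.block1(x))
        h2 = self.act(self.block2(h1))
        h3 = self.act(self.block3(h2))
        feats = [self.pool(h).flatten(1) for h in (h1, h2, h3)]
        return self.fc(torch.cat(feats, dim=1))

# Usage: logits = GlobalConnectedNet()(torch.randn(8, 3, 32, 32))
```

Because each block's pooled output reaches the loss through a single linear layer, the gradient no longer has to pass through every later block to update the early convolutions, which is one common way to mitigate vanishing gradients in shallower architectures.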

Citations (3)
