Comparative Performance Analysis of Neural Networks Architectures on H2O Platform for Various Activation Functions (1707.04940v1)

Published 16 Jul 2017 in cs.LG, cs.CV, and cs.PF

Abstract: Deep learning (deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures or otherwise composed of multiple non-linear transformations. In this paper, we present the results of testing neural network architectures on the H2O platform for various activation functions, stopping metrics, and other parameters of the machine learning algorithm. It was demonstrated, for the use case of the MNIST database of handwritten digits in single-threaded mode, that blind selection of these parameters can hugely increase the runtime (by 2-3 orders of magnitude) without a significant increase in precision. This result can have a crucial influence on the optimization of available and new machine learning methods, especially for image recognition problems.
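
The experiment described in the abstract varies the activation function and stopping metric of H2O's deep learning estimator on MNIST in single-threaded mode. The snippet below is a minimal sketch of how one such configuration could be set up with H2O's Python API; it is not the authors' exact setup, and the file names, hidden-layer sizes, and hyperparameter values are illustrative assumptions.

```python
# Sketch of one H2O deep learning configuration on MNIST (single-threaded).
# File paths, architecture, and hyperparameters are assumptions for illustration.
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

# Single-threaded mode, matching the paper's benchmark setting.
h2o.init(nthreads=1)

# Hypothetical CSV exports of MNIST: 784 pixel columns plus a "label" column.
train = h2o.import_file("mnist_train.csv")
test = h2o.import_file("mnist_test.csv")

y = "label"
x = [c for c in train.columns if c != y]
train[y] = train[y].asfactor()  # treat digit labels as a categorical target
test[y] = test[y].asfactor()

# One point in the grid explored by the paper: activation function
# (e.g. Tanh, Rectifier, Maxout) and stopping metric are the knobs of interest.
model = H2ODeepLearningEstimator(
    activation="Rectifier",
    hidden=[200, 200],                    # illustrative architecture
    epochs=10,
    stopping_metric="misclassification",  # one of the stopping metrics tested
    stopping_rounds=3,
    stopping_tolerance=1e-3,
    seed=1,
)
model.train(x=x, y=y, training_frame=train, validation_frame=test)

print(model.model_performance(test_data=test))
```

Timing each such run (e.g. with Python's `time` module around `model.train`) and sweeping the activation function and stopping metric would reproduce the kind of runtime-versus-precision comparison the paper reports.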

Authors (3)
  1. Yuriy Kochura (8 papers)
  2. Sergii Stirenko (19 papers)
  3. Yuri Gordienko (21 papers)
Citations (12)
