A new Sparse Auto-encoder based Framework using Grey Wolf Optimizer for Data Classification Problem

Published 29 Jan 2022 in cs.NE, cs.AI, and cs.LG | (2201.12493v1)

Abstract: One of the most important properties of deep auto-encoders (DAEs) is their capability to extract high-level features from raw data. Hence, autoencoders have recently been preferred in various classification problems such as image and voice recognition, computer security, and medical data analysis. Despite their popularity and high performance, the training phase of autoencoders remains a challenging task, since it involves selecting the best parameters that allow the model to approach optimal results. Different training approaches have been applied to train sparse autoencoders. Previous studies and preliminary experiments reveal that these approaches may produce remarkable results on some problems but disappointing results on other, more complex problems. Metaheuristic algorithms have emerged over the last two decades and are becoming an essential part of contemporary optimization techniques. Grey wolf optimization (GWO) is one of the most recent of these algorithms and is applied to train sparse auto-encoders in this study. The model is validated on several popular gene expression databases. Results are compared with previous state-of-the-art methods studied on the same data sets, as well as with other popular metaheuristic algorithms, namely Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). Results reveal that the model trained with GWO outperforms both conventional models and models trained with the most popular metaheuristic algorithms.
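The abstract does not give implementation details, but the core idea, using GWO to search the weight space of a sparse auto-encoder instead of gradient-based training, can be sketched as follows. The objective (reconstruction MSE plus a KL-divergence sparsity penalty), the network size, and all hyperparameters below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (assumption-based): a single-hidden-layer sparse auto-encoder
# whose weights are searched by Grey Wolf Optimizer rather than backpropagation.
# The objective (reconstruction MSE + KL sparsity penalty) and all
# hyperparameters are illustrative, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)

def unpack(theta, n_in, n_hid):
    """Split a flat parameter vector into encoder/decoder weights and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_hid, n_in); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid * n_in].reshape(n_in, n_hid); i += n_hid * n_in
    b2 = theta[i:i + n_in]
    return W1, b1, W2, b2

def fitness(theta, X, n_hid, rho=0.05, beta=3.0):
    """Reconstruction error plus KL-divergence sparsity penalty (assumed objective)."""
    n_in = X.shape[1]
    W1, b1, W2, b2 = unpack(theta, n_in, n_hid)
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))          # sigmoid hidden code
    Xr = H @ W2.T + b2                                   # linear reconstruction
    rho_hat = np.clip(H.mean(axis=0), 1e-6, 1 - 1e-6)    # mean activation per unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return np.mean((X - Xr) ** 2) + beta * kl

def gwo_train(X, n_hid=16, n_wolves=20, n_iter=200):
    """Standard GWO position updates driven by the alpha, beta and delta wolves."""
    n_in = X.shape[1]
    dim = 2 * n_in * n_hid + n_hid + n_in                # total number of parameters
    wolves = rng.uniform(-1, 1, size=(n_wolves, dim))
    scores = np.array([fitness(w, X, n_hid) for w in wolves])

    for t in range(n_iter):
        order = np.argsort(scores)
        alpha, beta_w, delta = (wolves[order[k]].copy() for k in range(3))
        a = 2.0 - 2.0 * t / n_iter                       # decreases linearly from 2 to 0
        for i in range(n_wolves):
            cand = np.zeros(dim)
            for leader in (alpha, beta_w, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])       # distance to the leader
                cand += leader - A * D
            wolves[i] = cand / 3.0                       # average of the three pulls
            scores[i] = fitness(wolves[i], X, n_hid)
    best = np.argmin(scores)
    return wolves[best], scores[best]

# Toy usage with synthetic data standing in for a gene-expression matrix.
X = rng.standard_normal((64, 32))
theta, best_score = gwo_train(X, n_hid=8, n_iter=50)
print("best fitness:", best_score)
```

In practice the toy matrix would be replaced by the gene expression data sets used in the paper, and the same fitness function would drive the GA, PSO, and ABC baselines for a like-for-like comparison.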
