Using Fitness Dependent Optimizer for Training Multi-layer Perceptron (2201.00563v1)

Published 3 Jan 2022 in cs.NE

Abstract: This study presents a novel training algorithm based on the recently proposed Fitness Dependent Optimizer (FDO). The stability of this algorithm has been verified, and its performance demonstrated, in both the exploration and exploitation stages using standard measurements. This motivated us to gauge the performance of the algorithm in training multilayer perceptron neural networks (MLPs). This study combines FDO with an MLP (codename FDO-MLP) to optimize weights and biases for predicting student outcomes. This approach can improve the learning system with respect to students' educational backgrounds as well as increase their achievement. The experimental results are validated by comparison with the Back-Propagation algorithm (BP) and several evolutionary models, such as FDO with cascade MLP (FDO-CMLP), Grey Wolf Optimizer (GWO) combined with MLP (GWO-MLP), modified GWO combined with MLP (MGWO-MLP), GWO with cascade MLP (GWO-CMLP), and modified GWO with cascade MLP (MGWO-CMLP). The qualitative and quantitative results show that the proposed approach using FDO as a trainer outperforms the other approaches using different trainers on the dataset in terms of convergence speed and local optima avoidance. The proposed FDO-MLP approach achieves a classification rate of 0.97.
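The abstract describes training an MLP by letting a population-based optimizer search over the network's weights and biases instead of using gradient-based back-propagation. The Python sketch below illustrates that general idea only: it flattens a small MLP's parameters into one vector and improves a population of candidate vectors with a simplified fitness-weight-scaled step. The toy dataset, network size, and update rule are illustrative assumptions, not the authors' exact FDO-MLP algorithm or their student-outcome data.

```python
import numpy as np

# Hypothetical toy dataset standing in for the paper's student data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# One-hidden-layer MLP; all weights and biases live in a single flat vector.
N_HIDDEN = 5
DIM = 4 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1, b1, W2, b2

def unpack(vec):
    """Split a flat parameter vector into the MLP's weights and biases."""
    i = 0
    W1 = vec[i:i + 4 * N_HIDDEN].reshape(4, N_HIDDEN); i += 4 * N_HIDDEN
    b1 = vec[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = vec[i:i + N_HIDDEN].reshape(N_HIDDEN, 1); i += N_HIDDEN
    b2 = vec[i:i + 1]
    return W1, b1, W2, b2

def fitness(vec):
    """Fitness to minimize: mean squared error of the MLP's output against y."""
    W1, b1, W2, b2 = unpack(vec)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return float(np.mean((out.ravel() - y) ** 2))

# Simplified fitness-dependent search: each agent steps toward the global best,
# with the step scaled by the ratio of the best fitness to its own fitness.
POP, ITERS = 20, 200
agents = rng.normal(scale=0.5, size=(POP, DIM))
pace = np.zeros_like(agents)
fits = np.array([fitness(a) for a in agents])
best = agents[fits.argmin()].copy()
best_fit = fits.min()

for _ in range(ITERS):
    for i in range(POP):
        fw = abs(best_fit / (fits[i] + 1e-12))      # fitness weight in (0, 1]
        r = rng.uniform(-1, 1)                      # random walk component
        pace[i] = (best - agents[i]) * fw + r * pace[i]
        candidate = agents[i] + pace[i]
        cand_fit = fitness(candidate)
        if cand_fit < fits[i]:                      # greedy acceptance
            agents[i], fits[i] = candidate, cand_fit
            if cand_fit < best_fit:
                best, best_fit = candidate.copy(), cand_fit

print(f"best training MSE: {best_fit:.4f}")
```

The point of the sketch is the structure shared by FDO-MLP and the GWO-based baselines in the abstract: the network is treated as a black-box fitness function over a flat parameter vector, so swapping the trainer only means swapping the update rule inside the loop.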

Citations (5)
