
Parameters for the best convergence of an optimization algorithm On-The-Fly (2009.11390v1)

Published 23 Sep 2020 in math.OC, cs.LG, and cs.NE

Abstract: What sparked my interest was how certain parameters produced better convergence of an optimization algorithm even though the objective formula showed no significant differences. This led to the research question: 'Which parameters provide the most optimal convergence of an objective formula using the on-the-fly method?' The research followed an experimental design in which five different optimization algorithms were tested against different objective functions to discover which parameter settings produced the best convergence. To find the correct parameter, a method called 'on-the-fly' was applied. One of the test runs showed that each parameter increases or decreases convergence accuracy toward the objective function depending on which optimization algorithm is chosen. In one result, an evolutionary algorithm applied with only the recombination technique did well at finding the best optimum. Other results showed increasing accuracy when mutation or several parameters were combined in a single test run. In conclusion, each algorithm has its own set of parameters that converge differently, depending also on the target formula that is used. This confirms that the on-the-fly method is a suitable approach to finding the best parameter: manipulating a parameter and observing the effects during the process works, as long as the learning cost rate decreases over time.
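The abstract's core idea, adjusting a control parameter while the optimization is running and keeping only changes that improve convergence, can be illustrated with a short sketch. The code below is not from the paper; it is a minimal, hypothetical example in which an evolutionary algorithm's mutation rate is tuned on the fly against a simple sphere objective, with the perturbation step shrinking over time to reflect the abstract's requirement that the learning cost rate decrease. All function and parameter names are assumptions for illustration.

```python
import random

def sphere(x):
    """Illustrative objective formula: the sphere function."""
    return sum(v * v for v in x)

def evolve(pop, mutation_rate, objective):
    """One generation: recombination plus mutation, keep the better half."""
    children = []
    for _ in range(len(pop)):
        a, b = random.sample(pop, 2)
        child = [(ai + bi) / 2 for ai, bi in zip(a, b)]               # recombination
        child = [v + random.gauss(0, mutation_rate) for v in child]   # mutation
        children.append(child)
    return sorted(pop + children, key=objective)[:len(pop)]

def on_the_fly_tuning(objective, dim=5, pop_size=20, generations=200):
    """Perturb the mutation rate during the run; keep changes that improve the best value."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    mutation_rate = 0.5
    step = 0.2                            # strength of the on-the-fly perturbation
    best = objective(min(pop, key=objective))
    for _ in range(generations):
        trial_rate = max(1e-3, mutation_rate + random.choice([-step, step]))
        pop = evolve(pop, trial_rate, objective)
        new_best = objective(min(pop, key=objective))
        if new_best < best:               # the manipulation helped, so keep it
            mutation_rate = trial_rate
            best = new_best
        step *= 0.99                      # decreasing "learning cost rate" over time
    return mutation_rate, best

if __name__ == "__main__":
    rate, best = on_the_fly_tuning(sphere)
    print(f"tuned mutation rate: {rate:.3f}, best objective value: {best:.6f}")
```

This sketch only demonstrates the observe-and-keep loop described in the abstract; the paper's actual algorithms, parameters, and objective formulas may differ.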
