Distributed Global Optimization (DGO) (2012.09252v1)

Published 16 Dec 2020 in cs.DC and math.OC

Abstract: A new technique for global optimization and its applications, in particular to neural networks, are presented. The algorithm is compared to other global optimization methods such as gradient descent (GD), Monte Carlo (MC), the genetic algorithm (GA), and commercial packages. The technique proved worthy of further study given its accuracy of convergence, speed of convergence, and ease of use. Its main advantages are:

1. The function being optimized does not have to be continuous or differentiable.
2. No random mechanism is used, so the algorithm does not inherit the slow speed of random searches.
3. No fine-tuning parameters (such as the step size of GD or the temperature of simulated annealing) are needed.
4. The algorithm can be implemented on parallel computers so that computation time grows only slightly (rather than linearly) as the number of dimensions increases; a time complexity of O(n) is achieved.
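The abstract states DGO's properties (derivative-free, deterministic, no tuning parameters, parallelizable) but not its mechanism. As a rough, hypothetical sketch of what a search with those properties can look like, here is a generic deterministic grid-refinement minimizer; the function name, parameters, and strategy are illustrative assumptions, not the paper's DGO algorithm.

```python
# Hypothetical illustration only: the abstract does not describe DGO's internals.
# A deterministic, derivative-free grid-refinement search sharing the claimed
# properties (no gradients, no randomness, independent function evaluations
# that could be distributed); it is NOT the paper's algorithm.
import itertools
import numpy as np

def grid_refine_minimize(f, lower, upper, points_per_dim=5, iterations=20):
    """Repeatedly evaluate f on a grid and shrink the box around the best point."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    best_x, best_val = None, np.inf
    for _ in range(iterations):
        axes = [np.linspace(lo, hi, points_per_dim) for lo, hi in zip(lower, upper)]
        # Each grid evaluation is independent, so this loop could be spread
        # across parallel workers without changing the result.
        for x in itertools.product(*axes):
            val = f(np.array(x))
            if val < best_val:
                best_val, best_x = val, np.array(x)
        # Shrink the search box around the current best point (deterministic,
        # no step size or temperature to tune).
        half_width = (upper - lower) / 4.0
        lower = np.maximum(lower, best_x - half_width)
        upper = np.minimum(upper, best_x + half_width)
    return best_x, best_val

# Example: a non-differentiable objective (|x| + |y|) is handled without gradients.
x_opt, v_opt = grid_refine_minimize(lambda x: np.abs(x).sum(), [-10, -10], [10, 10])
```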

Citations (10)
