Parallel Implementation of Distributed Global Optimization (DGO) (2012.09861v1)

Published 16 Dec 2020 in cs.DC and math.OC

Abstract: Parallel implementations of distributed global optimization (DGO) [13] on the MP-1 and NCUBE parallel computers revealed an approximately O(n) increase in the performance of this algorithm. Implementing DGO on parallel processors can therefore remedy the algorithm's only drawback, the O(n^2) growth in execution time as the number of dimensions increases. The speedup factor of the parallel implementations of DGO is measured with respect to the sequential execution time of the identical problem on a SPARC IV computer. The best speedup was achieved by the SIMD implementation of the algorithm on the MP-1, with a total speedup of 126 for an optimization problem with n = 9. This optimization problem was distributed across the 128 PEs of the MasPar.
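
The following is a minimal sketch of the data-parallel pattern the abstract describes: candidate points of an n-dimensional search space are evaluated across worker processes, and the speedup is the ratio of sequential to parallel wall-clock time. The objective function, search bounds, and worker count are illustrative placeholders, not taken from the DGO paper; a generic multiprocessing pool stands in for the 128 SIMD PEs of the MasPar.

import time
from multiprocessing import Pool
import numpy as np

def objective(x):
    # Placeholder multimodal test function (assumption, not the paper's benchmark).
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def best_of(points):
    # Each "processing element" evaluates its share and reports its local best.
    return min((objective(p), tuple(p)) for p in points)

if __name__ == "__main__":
    n = 9                                   # problem dimensionality, as in the abstract
    rng = np.random.default_rng(0)
    candidates = rng.uniform(-5.12, 5.12, size=(128_000, n))

    # Sequential baseline, analogous to the SPARC IV reference run.
    t0 = time.perf_counter()
    seq_best = best_of(candidates)
    t_seq = time.perf_counter() - t0

    # Parallel run: split the candidate set across workers.
    workers = 8                             # stand-in for the MasPar's PEs
    chunks = np.array_split(candidates, workers)
    t0 = time.perf_counter()
    with Pool(workers) as pool:
        par_best = min(pool.map(best_of, chunks))
    t_par = time.perf_counter() - t0

    print(f"best value found: {par_best[0]:.4f}")
    print(f"speedup: {t_seq / t_par:.1f}x on {workers} workers")

The reported speedup factor follows the same definition used in the abstract: sequential execution time divided by parallel execution time on the identical problem.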
