Modified swarm-based metaheuristics enhance Gradient Descent initialization performance: Application for EEG spatial filtering (1907.08220v2)

Published 13 Jun 2019 in cs.NE, cs.LG, and stat.ML

Abstract: Gradient Descent (GD) approximators often fail in solution spaces with multiple scales of convexity, e.g., in subspace learning and neural network scenarios. One remedy is to run GD multiple times from different randomized initial states and select the best solution over all runs; however, this idea has proven impractical in many cases. Even swarm-based optimizers such as Particle Swarm Optimization (PSO) and the Imperialistic Competitive Algorithm (ICA), which are commonly used as GD initializers, have failed to find optimal solutions in some applications. In this paper, swarm-based optimizers such as ICA and PSO are modified within a new optimization framework to improve GD performance on applications with a high number of convex localities at multiple scales. The performance of the proposed method is analyzed on a nonlinear subspace-filtering objective function over EEG data. The proposed metaheuristic outperforms commonly used baseline GD initializers in both EEG classification accuracy and EEG loss-function fitness. The optimizers are also compared to each other on several CEC 2014 benchmark functions, where our method again outperforms the other algorithms.
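To make the two-stage pipeline the abstract describes concrete, here is a minimal sketch of plain (unmodified) PSO used to seed a local GD refinement. This is not the paper's modified swarm framework or its EEG objective; the Rastrigin function stands in as a generic multimodal objective, and all hyperparameters and function names below are illustrative assumptions.

```python
import numpy as np

def rastrigin(x):
    # Stand-in multimodal objective with many convex localities
    # (a CEC-style benchmark, not the paper's EEG loss).
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rastrigin_grad(x):
    # Analytic gradient of the Rastrigin function.
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def pso(f, dim, n_particles=30, iters=100, bounds=(-5.12, 5.12), seed=0):
    # Standard PSO: global exploration; returns the swarm's best position.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5  # illustrative inertia/cognitive/social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

def gradient_descent(grad, x0, lr=0.01, steps=500):
    # Local refinement from the swarm-supplied initial state.
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x0 = pso(rastrigin, dim=10)                    # stage 1: global exploration
x_star = gradient_descent(rastrigin_grad, x0)  # stage 2: local refinement
print(rastrigin(x_star))
```

The paper's contribution is a modification of the swarm stage itself to cope with convex localities at multiple scales; the sketch above only shows the baseline initializer-plus-GD structure that it improves upon.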
