Derivative-Free Optimization via Adaptive Sampling Strategies (2404.11893v1)

Published 18 Apr 2024 in math.OC

Abstract: In this paper, we present a novel derivative-free optimization framework for solving unconstrained stochastic optimization problems. Many problems in fields ranging from simulation optimization to reinforcement learning involve settings where only stochastic function values are obtained via an oracle, with no gradient information available, necessitating the use of derivative-free optimization methodologies. Our approach estimates gradients from stochastic function evaluations and integrates adaptive sampling techniques to control the accuracy of these stochastic approximations. We consider various gradient estimation techniques, including standard finite difference, Gaussian smoothing, sphere smoothing, randomized coordinate finite difference, and randomized subspace finite difference methods. We provide theoretical convergence guarantees for our framework and analyze the worst-case iteration and sample complexities associated with each gradient estimation method. Finally, we demonstrate the empirical performance of the methods on logistic regression and nonlinear least squares problems.
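To make the abstract's two ingredients concrete, below is a minimal Python sketch of one of the gradient estimators it lists (Gaussian smoothing) combined with an adaptive sampling rule. Everything here is an illustrative assumption rather than the authors' implementation: the oracle, the smoothing radius `sigma`, and the norm-test-style stopping rule (grow the batch until the estimator's standard error is at most `theta` times the estimated gradient norm) merely stand in for the paper's accuracy-control condition.

```python
import numpy as np

def gs_sample(oracle, x, sigma, rng):
    """One Gaussian-smoothing gradient sample: ((f(x+sigma*u) - f(x)) / sigma) * u,
    where u is a standard Gaussian direction and f is a noisy zeroth-order oracle."""
    u = rng.standard_normal(x.size)
    return ((oracle(x + sigma * u) - oracle(x)) / sigma) * u

def adaptive_gs_gradient(oracle, x, sigma=1e-3, theta=0.5, n0=8, n_max=4096, rng=None):
    """Average gs_sample draws, doubling the batch until the estimated standard
    error of the mean is at most theta * ||g_hat|| (a norm-test-style rule that
    stands in for the paper's adaptive sampling condition)."""
    rng = np.random.default_rng(0) if rng is None else rng
    samples = [gs_sample(oracle, x, sigma, rng) for _ in range(n0)]
    while len(samples) < n_max:
        S = np.asarray(samples)
        g_hat = S.mean(axis=0)
        # Total variance of the batch mean, approximating E||g_hat - E[g_hat]||^2.
        se2 = S.var(axis=0, ddof=1).sum() / len(samples)
        if se2 <= (theta * np.linalg.norm(g_hat)) ** 2:
            break
        # Accuracy test failed: double the batch and retest.
        samples += [gs_sample(oracle, x, sigma, rng) for _ in range(len(samples))]
    return np.asarray(samples).mean(axis=0)

# Usage: one fixed-step descent update on a hypothetical noisy quadratic oracle.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    oracle = lambda x: np.sum(x**2) + 0.01 * rng.standard_normal()
    x = np.ones(5)
    g = adaptive_gs_gradient(oracle, x, rng=rng)
    x_next = x - 0.1 * g
```

The other estimators in the abstract (standard, randomized coordinate, and randomized subspace finite differences, sphere smoothing) would slot in by swapping the direction distribution inside `gs_sample`; the adaptive batch-sizing loop is independent of that choice.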

Citations (3)
