Sequential test sampling for stochastic derivative-free optimization (2509.14505v1)

Published 18 Sep 2025 in math.OC

Abstract: In many derivative-free optimization algorithms, a sufficient decrease condition decides whether to accept a trial step in each iteration. This condition typically requires that the potential objective function value decrease of the trial step, i.e., the true reduction in the objective function value that would be achieved by moving from the current point to the trial point, be larger than a multiple of the squared stepsize. When the objective function is stochastic, evaluating such a condition accurately can require a large estimation cost. In this paper, we frame the evaluation of the sufficient decrease condition in a stochastic setting as a hypothesis test problem and solve it through a sequential hypothesis test. The two hypotheses considered in the problem correspond to accepting or rejecting the trial step. This test sequentially collects noisy sample observations of the potential decrease until their sum crosses either a lower or an upper boundary depending on the noise variance and the stepsize. When the noise of observations is Gaussian, we derive a novel sample size result, showing that the effort to evaluate the condition explicitly depends on the potential decrease, and that the sequential test terminates early whenever the sufficient decrease condition is away from satisfaction. Furthermore, when the potential decrease is $\Theta(\delta^r)$ for some $r\in(0,2]$, the expected sample size decreases from $\Theta(\delta^{-4})$ to $O(\delta^{-2-r})$. We apply this sequential test sampling framework to probabilistic-descent direct search. To analyze its convergence rate, we extend a renewal-reward supermartingale-based convergence rate analysis framework to an arbitrary probability threshold. By doing so, we are able to show that probabilistic-descent direct search has an iteration complexity of $O(n/\epsilon^2)$ for gradient norm...
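
The abstract describes a two-boundary sequential test: noisy observations of the potential decrease are accumulated until their running sum crosses an upper boundary (accept the trial step) or a lower boundary (reject it). Below is a minimal Python sketch of such a test. The observation model (sample_decrease), the constant c in the sufficient decrease condition, and the boundary scaling are illustrative assumptions only; the paper derives its own boundaries from the noise variance and the stepsize.

import numpy as np

def sequential_decrease_test(sample_decrease, delta, sigma, c=1.0, max_samples=10_000):
    """Sequentially decide whether a trial step satisfies a sufficient
    decrease condition under Gaussian observation noise.

    sample_decrease : callable returning one noisy observation of the
                      potential decrease f(x_k) - f(x_k + d_k)
    delta           : current stepsize; the target decrease is c * delta**2
    sigma           : standard deviation of the observation noise
    Returns True to accept the trial step, False to reject it.
    """
    threshold = c * delta**2                              # sufficient decrease target
    boundary = 5.0 * sigma**2 / max(threshold, 1e-12)     # hypothetical boundary scale
    running_sum = 0.0
    for _ in range(max_samples):
        # accumulate centered observations of the potential decrease
        running_sum += sample_decrease() - threshold
        if running_sum >= boundary:        # strong evidence the condition holds
            return True
        if running_sum <= -boundary:       # strong evidence the condition fails
            return False
    # fall back to the sign of the accumulated evidence if no boundary is crossed
    return running_sum > 0.0

# Example usage with a synthetic noisy oracle:
rng = np.random.default_rng(0)
true_decrease, sigma, delta = 0.05, 0.1, 0.2
accept = sequential_decrease_test(lambda: true_decrease + sigma * rng.normal(), delta, sigma)

The key property the abstract highlights is visible in this sketch: when the true decrease is far from the threshold (on either side), the running sum drifts quickly toward a boundary, so the test stops after few samples; the expensive case is when the decrease is close to the threshold.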
