Zeroth-Order Non-smooth Non-convex Optimization via Gaussian Smoothing (2508.11073v1)

Published 14 Aug 2025 in math.OC

Abstract: This paper addresses stochastic optimization of Lipschitz-continuous, nonsmooth and nonconvex objectives over compact convex sets, where only noisy function evaluations are available. While gradient-free methods have been developed for smooth nonconvex problems, extending these techniques to the nonsmooth setting remains challenging. The primary difficulty arises from the absence of a Taylor series expansion for Clarke subdifferentials, which limits the ability to approximate and analyze the behavior of the objective function in a neighborhood of a point. We propose a two time-scale zeroth-order projected stochastic subgradient method leveraging Gaussian smoothing to approximate Clarke subdifferentials. First, we establish that the expectation of the Gaussian-smoothed subgradient lies within an explicitly bounded error of the Clarke subdifferential, a result that extends prior analyses beyond convex/smooth settings. Second, we design a novel algorithm with coupled updates: a fast timescale tracks the subgradient approximation, while a slow timescale drives convergence. Using continuous-time dynamical systems theory and robust perturbation analysis, we prove that iterates converge almost surely to a neighborhood of the set of Clarke stationary points, with neighborhood size controlled by the smoothing parameter. To our knowledge, this is the first zeroth-order method achieving almost sure convergence for constrained nonsmooth nonconvex optimization problems.
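To make the coupled-update idea concrete, here is a minimal, hypothetical Python sketch of a two time-scale zeroth-order projected scheme with a Gaussian-smoothing subgradient estimate. It is not the paper's algorithm or analysis: the toy objective, box constraint, and step-size schedules are illustrative assumptions chosen only to show the structure (a fast iterate tracking the smoothed subgradient, a slow projected iterate driving the descent).

```python
# Minimal sketch (not the paper's exact method): two time-scale zeroth-order
# projected scheme using a Gaussian-smoothing estimate of the subgradient.
# The objective f, the box constraint, and the step-size choices below are
# illustrative assumptions, not taken from the paper.
import numpy as np

def f(x):
    # Nonsmooth, nonconvex toy objective (illustrative only).
    return np.abs(x[0]) + 0.5 * np.sin(3.0 * x[1]) * np.abs(x[1])

def project_box(x, lo=-1.0, hi=1.0):
    # Projection onto a compact convex set (here, a simple box).
    return np.clip(x, lo, hi)

def two_timescale_zeroth_order(x0, mu=0.05, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = project_box(np.asarray(x0, dtype=float))
    y = np.zeros_like(x)                      # fast-timescale tracker
    for k in range(1, iters + 1):
        a_k = 1.0 / k                         # slow step size
        b_k = 1.0 / k ** 0.6                  # fast step size (b_k >> a_k)
        u = rng.standard_normal(x.shape)      # Gaussian direction
        # Two-point zeroth-order estimate of the Gaussian-smoothed gradient.
        g = (f(x + mu * u) - f(x)) / mu * u
        # Fast timescale: track the smoothed subgradient approximation.
        y = y + b_k * (g - y)
        # Slow timescale: projected step along the tracked direction.
        x = project_box(x - a_k * y)
    return x

if __name__ == "__main__":
    print(two_timescale_zeroth_order(x0=[0.8, -0.6]))
```

In this sketch the smoothing parameter `mu` plays the role described in the abstract: it controls how closely the smoothed subgradient estimate approximates the Clarke subdifferential, and hence the size of the neighborhood of stationary points the iterates settle into.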

