
Accelerating Derivative-Free Optimization with Dimension Reduction and Hyperparameter Learning (2101.07444v1)

Published 19 Jan 2021 in math.OC

Abstract: We consider convex, black-box objective functions with additive or multiplicative noise, defined on a high-dimensional parameter space with a lower-dimensional data space, where gradients of the map exist but may be inaccessible. We investigate Derivative-Free Optimization (DFO) in this setting and propose a novel method, Active STARS (ASTARS), based on STARS (Chen and Wild, 2015) and dimension reduction in parameter space via Active Subspace (AS) methods (Constantine, 2015). STARS hyperparameters are inversely proportional to the known dimension of the parameter space, resulting in heavy smoothing and small step sizes in high dimensions. When possible, ASTARS leverages a lower-dimensional AS, a set of directions in parameter space that accounts for the majority of the variance in function values. ASTARS iterates are updated with steps taken only in the AS, reducing the value of the objective function more efficiently than STARS, which updates iterates in the full parameter space. Computational costs may be reduced further by learning the ASTARS hyperparameters and the AS, reducing the total number of objective function evaluations and eliminating the requirement that the user specify hyperparameters, which may be unknown in our setting. We call this method Fully Automated ASTARS (FAASTARS). We show that STARS and ASTARS both converge -- with a certain complexity -- even with inexact, estimated hyperparameters. We also find that FAASTARS converges with the use of estimated ASs and hyperparameters. We explore the effectiveness of ASTARS and FAASTARS in numerical examples comparing both methods to STARS.
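The abstract outlines the core mechanics: a STARS-style random-direction, finite-difference step whose hyperparameters shrink with dimension, and an ASTARS variant that restricts those steps to an estimated active subspace. Below is a minimal sketch of that idea, assuming NumPy; the function names (`stars_step`, `active_subspace`), the fixed values of the smoothing parameter `mu` and step size `h`, and the toy quadratic are all illustrative choices, not the paper's exact update rules or hyperparameter formulas (which in STARS depend on the noise variance and a Lipschitz constant).

```python
import numpy as np

def stars_step(f, x, mu, h, directions=None, rng=None):
    """One STARS-style iteration: a Gaussian-smoothed finite-difference
    step along a random direction. If `directions` (columns forming an
    orthonormal basis of an active subspace) is given, the direction is
    drawn inside that subspace, as in ASTARS; otherwise it is drawn in
    the full parameter space, as in STARS. Illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    if directions is None:
        u = rng.standard_normal(x.shape)        # full-space direction (STARS)
    else:
        z = rng.standard_normal(directions.shape[1])
        u = directions @ z                      # direction inside the AS (ASTARS)
    g = (f(x + mu * u) - f(x)) / mu             # noisy directional-derivative estimate
    return x - h * g * u

def active_subspace(grads, j):
    """Estimate a j-dimensional active subspace from sampled gradients:
    the top-j eigenvectors of the empirical covariance
    C = (1/M) * sum_i g_i g_i^T."""
    C = grads.T @ grads / grads.shape[0]
    _, vecs = np.linalg.eigh(C)                 # eigenvalues in ascending order
    return vecs[:, -j:]                         # top-j eigenvectors as columns

# Toy usage: a noisy 50-dimensional quadratic whose variation is
# dominated by its first two coordinates.
d, sigma = 50, 1e-4
rng = np.random.default_rng(0)
A = np.diag([10.0, 5.0] + [1e-3] * (d - 2))
f = lambda x: x @ A @ x + sigma * rng.standard_normal()

# Gradients of the noiseless quadratic, 2*A*x, at random samples.
samples = rng.standard_normal((200, d))
grads = 2.0 * samples @ A
W = active_subspace(grads, j=2)                 # recovers (close to) e1, e2

x = np.ones(d)
for _ in range(500):
    x = stars_step(f, x, mu=1e-2, h=1e-2, directions=W, rng=rng)
```

In the fully derivative-free setting the sampled gradients would themselves be finite-difference estimates built from function evaluations (as FAASTARS automates); exact gradients of the noiseless quadratic are used above only to keep the sketch short. Note also that steps confined to the AS leave the inactive coordinates untouched, which is acceptable precisely because they contribute little to the variance of f.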
