Optimal two-stage procedures for estimating location and size of the maximum of a multivariate regression function (1302.4561v1)

Published 19 Feb 2013 in math.ST and stat.TH

Abstract: We propose a two-stage procedure for estimating the location $\boldsymbol{\mu}$ and size $M$ of the maximum of a smooth $d$-variate regression function $f$. In the first stage, a preliminary estimator of $\boldsymbol{\mu}$ obtained from a standard nonparametric smoothing method is used. At the second stage, we "zoom in" near the vicinity of the preliminary estimator and make further observations at some design points in that vicinity. We fit an appropriate polynomial regression model to estimate the location and size of the maximum. We establish that, under suitable smoothness conditions and an appropriate choice of the zooming, the second-stage estimators have better convergence rates than the corresponding first-stage estimators of $\boldsymbol{\mu}$ and $M$. More specifically, for $\alpha$-smooth regression functions, the optimal nonparametric rates $n^{-(\alpha-1)/(2\alpha+d)}$ and $n^{-\alpha/(2\alpha+d)}$ at the first stage can be improved to $n^{-(\alpha-1)/(2\alpha)}$ and $n^{-1/2}$, respectively, for $\alpha>1+\sqrt{1+d/2}$. These rates are optimal in the class of all possible sequential estimators. Interestingly, the two-stage procedure resolves the "curse of dimensionality" problem to some extent, as the dimension $d$ does not control the second-stage convergence rates, provided that the function class is sufficiently smooth. We also consider a multi-stage generalization of our procedure that attains the optimal rate for any smoothness level $\alpha>2$, starting from a preliminary estimator with any power-law rate at the first stage.
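The two-stage idea can be sketched in one dimension: a crude nonparametric fit locates the maximum roughly, then fresh observations in a small window around that point are fitted with a quadratic whose vertex gives the refined location and size estimates. Below is a minimal illustrative simulation; the target function, noise level, bandwidth `h`, and zoom half-width `delta` are assumptions chosen for the example, not the paper's prescribed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# True regression function on [0, 1], maximized at mu = 0.6 with M = 1.
f = lambda x: np.exp(-20 * (x - 0.6) ** 2)
sigma = 0.1  # noise standard deviation (assumed)

# --- Stage 1: preliminary Nadaraya-Watson estimate of the argmax ---
n1 = 500
x1 = rng.uniform(0, 1, n1)
y1 = f(x1) + sigma * rng.normal(size=n1)

grid = np.linspace(0, 1, 201)
h = 0.05  # kernel bandwidth (assumed, not tuned)
K = np.exp(-0.5 * ((grid[:, None] - x1[None, :]) / h) ** 2)
fhat = (K @ y1) / K.sum(axis=1)
mu1 = grid[np.argmax(fhat)]  # crude first-stage location estimate

# --- Stage 2: zoom in near mu1 and fit a local quadratic ---
delta = 0.1  # zoom-in half-width (assumed)
n2 = 500
x2 = rng.uniform(mu1 - delta, mu1 + delta, n2)
y2 = f(x2) + sigma * rng.normal(size=n2)

# Least-squares fit of y ~ b0 + b1*x + b2*x^2 on the zoomed sample
X = np.column_stack([np.ones(n2), x2, x2 ** 2])
b0, b1, b2 = np.linalg.lstsq(X, y2, rcond=None)[0]

mu2 = -b1 / (2 * b2)               # vertex: refined location estimate
M2 = b0 + b1 * mu2 + b2 * mu2 ** 2  # refined size-of-maximum estimate
```

The vertex formula $-b_1/(2b_2)$ is why zooming helps: over the shrinking window the function is nearly quadratic, so the polynomial fit concentrates all $n_2$ observations on a low-dimensional parametric problem, which is the mechanism behind the dimension-free second-stage rates described in the abstract.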
