Adaptive estimation of linear functionals in functional linear models
Abstract: We consider the estimation of the value of a linear functional of the slope parameter in functional linear regression, where scalar responses are modeled as depending on random functions. Johannes and Schenk [2010] showed that a plug-in estimator based on dimension reduction and additional thresholding can attain minimax optimal rates of convergence up to a constant. However, this estimation procedure requires an optimal choice of a tuning parameter that depends on certain characteristics of the slope function and of the covariance operator associated with the functional regressor. As these are unknown in practice, we investigate a fully data-driven choice of the tuning parameter based on a combination of model selection and Lepski's method, inspired by the recent work of Goldenshluger and Lepski [2011]. The tuning parameter is selected as the minimizer of a stochastic penalized contrast function, imitating Lepski's method, over a random collection of admissible values. We show that this adaptive procedure attains the lower bound for the minimax risk up to a logarithmic factor over a wide range of classes of slope functions and covariance operators. In particular, our theory covers pointwise estimation as well as the estimation of local averages of the slope parameter.
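The following is a minimal simulation sketch, not the authors' code, illustrating the structure described in the abstract: a thresholded plug-in estimator of a linear functional ell(beta) in functional linear regression, with the dimension parameter chosen by a Goldenshluger-Lepski-type comparison against a penalty. The trigonometric basis, the thresholding rule, the penalty constants, and the admissible collection below are illustrative assumptions and do not reproduce the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a functional linear model Y = <beta, X> + eps in coefficient form ---
n, J = 500, 50                        # sample size, number of basis functions
t = np.linspace(0.0, 1.0, 200)        # grid used only to evaluate the functional

def trig_basis(t, J):
    """Trigonometric basis psi_1, ..., psi_J on [0, 1] (illustrative choice)."""
    B = np.empty((J, t.size))
    B[0] = 1.0
    for j in range(1, J):
        k = (j + 1) // 2
        B[j] = np.sqrt(2) * (np.cos(2 * np.pi * k * t) if j % 2 == 0
                             else np.sin(2 * np.pi * k * t))
    return B

psi = trig_basis(t, J)                                   # (J, len(t))
beta_coef = np.array([(-1.0) ** j / (j + 1) ** 2 for j in range(J)])
lam = np.array([1.0 / (j + 1) ** 2 for j in range(J)])   # covariance eigenvalues

# xi[i, j] plays the role of the basis coefficient <X_i, psi_j> of the regressor;
# in practice these would be computed by numerical integration of X_i against psi_j.
xi = rng.normal(size=(n, J)) * np.sqrt(lam)
sigma = 0.5
Y = xi @ beta_coef + sigma * rng.normal(size=n)

# linear functional ell(beta) = beta(t0) (pointwise evaluation), acting on the basis
t0 = 0.3
ell = psi[:, np.argmin(np.abs(t - t0))]                  # ell_j = psi_j(t0)
theta_true = ell @ beta_coef

# --- thresholded plug-in estimators theta_hat(m), m = 1..J ---
g_hat = xi.T @ Y / n                                     # cross-covariance coefficients
lam_hat = (xi ** 2).mean(axis=0)                         # empirical eigenvalues
keep = lam_hat >= 1.0 / np.sqrt(n)                       # thresholding (illustrative)
b_hat = np.where(keep, g_hat / np.maximum(lam_hat, 1e-12), 0.0)

def theta_hat(m):
    return ell[:m] @ b_hat[:m]

# --- Goldenshluger-Lepski-type choice of the dimension m ---
# pen(m): a stylized variance proxy with a log factor; the paper's penalty has a
# similar shape but theory-driven constants.
kappa = 2.0
pen = np.array([kappa * sigma ** 2 * np.log(n) / n *
                np.sum(ell[:m] ** 2 / np.maximum(lam_hat[:m], 1e-12))
                for m in range(1, J + 1)])

# random collection of admissible dimensions (illustrative truncation rule)
M_set = [m for m in range(1, J + 1) if np.all(keep[:m])]

def contrast(m):
    # A(m) = max over larger admissible m' of [(theta_hat(m') - theta_hat(m))^2 - pen(m')]_+
    diffs = [max((theta_hat(mp) - theta_hat(m)) ** 2 - pen[mp - 1], 0.0)
             for mp in M_set if mp >= m]
    return max(diffs) if diffs else 0.0

m_hat = min(M_set, key=lambda m: contrast(m) + pen[m - 1])
print(f"selected m = {m_hat}, estimate = {theta_hat(m_hat):.4f}, truth = {theta_true:.4f}")
```

The selection rule balances the bias proxy `contrast(m)` against the variance proxy `pen(m)`, which is the mechanism behind the adaptive, fully data-driven choice the abstract refers to; the constant `kappa` and the noise level `sigma` would have to be calibrated or estimated in practice.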