Near-Optimal Linear Regression under Distribution Shift (2106.12108v1)
Published 23 Jun 2021 in cs.LG and stat.ML
Abstract: Transfer learning is essential when sufficient labeled data is available from the source domain but labeled data from the target domain is scarce. We develop estimators that achieve minimax linear risk for linear regression problems under distribution shift. Our algorithms cover different transfer learning settings, including covariate shift and model shift. We also consider the cases where data are generated from either linear or general nonlinear models. We show that linear minimax estimators are within an absolute constant of the minimax risk, even among nonlinear estimators, for various source/target distributions.
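To make the covariate-shift setting concrete, the sketch below is a minimal illustration, not the paper's minimax procedure: it fits two estimators that are linear in the responses (ordinary least squares and ridge) on labeled source data and evaluates their excess prediction risk under a different target covariate distribution. The covariances, sample size, noise level, and ridge penalty are all hypothetical choices made only for illustration.

```python
# Illustrative sketch of linear regression under covariate shift
# (not the estimator proposed in the paper).
import numpy as np

rng = np.random.default_rng(0)
d, n_src = 5, 200

# Hypothetical true coefficients and noise level.
beta = rng.normal(size=d)
sigma = 0.5

# Source and target covariate covariances differ (covariate shift).
Sigma_src = np.diag(np.linspace(1.0, 2.0, d))
Sigma_tgt = np.diag(np.linspace(2.0, 0.2, d))

# Labeled data come only from the source; the target contributes
# just its covariate distribution.
X_src = rng.multivariate_normal(np.zeros(d), Sigma_src, size=n_src)
y_src = X_src @ beta + sigma * rng.normal(size=n_src)

def target_excess_risk(beta_hat):
    """Excess prediction risk under the *target* covariate distribution:
    E[(x^T (beta_hat - beta))^2] = (beta_hat - beta)^T Sigma_tgt (beta_hat - beta)."""
    diff = beta_hat - beta
    return float(diff @ Sigma_tgt @ diff)

# Two estimators that are linear functions of the response vector y.
beta_ols = np.linalg.lstsq(X_src, y_src, rcond=None)[0]
lam = 1.0  # ridge penalty, an arbitrary choice for this sketch
beta_ridge = np.linalg.solve(X_src.T @ X_src + lam * np.eye(d), X_src.T @ y_src)

print("target excess risk, OLS:  ", target_excess_risk(beta_ols))
print("target excess risk, ridge:", target_excess_risk(beta_ridge))
```

The class of "linear estimators" referenced in the abstract consists of estimators that are linear in the observed responses; both OLS and ridge above are members of that class, while the paper's contribution is to characterize the minimax-optimal member under a given source/target pair.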