
Linear regression through PAC-Bayesian truncation (1010.0072v2)

Published 1 Oct 2010 in math.ST and stat.TH

Abstract: We consider the problem of predicting as well as the best linear combination of d given functions in least squares regression under L∞ constraints on the linear combination. When the input distribution is known, there already exists an algorithm having an expected excess risk of order d/n, where n is the size of the training data. Without this strong assumption, standard results often contain a multiplicative log(n) factor, complex constants involving the conditioning of the Gram matrix of the covariates, kurtosis coefficients or some geometric quantity characterizing the relation between L2- and L∞-balls, and require some additional assumptions like exponential moments of the output. This work provides a PAC-Bayesian shrinkage procedure with a simple excess risk bound of order d/n holding in expectation and in deviations, under various assumptions. The common surprising factor of these results is their simplicity and the absence of exponential moment condition on the output distribution while achieving exponential deviations. The risk bounds are obtained through a PAC-Bayesian analysis on truncated differences of losses. We also show that these results can be generalized to other strongly convex loss functions.
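As a point of reference for the d/n rate discussed in the abstract, the following sketch simulates plain ordinary least squares on Gaussian design and measures the excess risk empirically. This is not the paper's PAC-Bayesian truncation procedure, only an illustrative baseline: with identity covariance, the excess risk of an estimator θ̂ over the best linear predictor θ* reduces to ‖θ̂ − θ*‖², and one expects it to shrink roughly like d/n as the sample size grows. All names (`excess_risk_ols`, the choice d = 10, the noise model) are assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10                                # number of basis functions
theta_star = rng.normal(size=d)       # best linear combination (ground truth)

def excess_risk_ols(n):
    # Draw n Gaussian covariates and noisy responses y = x.theta* + noise.
    X = rng.normal(size=(n, d))
    y = X @ theta_star + rng.normal(size=n)
    # Ordinary least squares fit.
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    # With identity input covariance, excess risk = ||theta_hat - theta_star||^2.
    return float(np.sum((theta_hat - theta_star) ** 2))

# Average over repetitions; the mean excess risk behaves roughly like d/n.
risks = {n: np.mean([excess_risk_ols(n) for _ in range(50)]) for n in (200, 2000)}
```

The point of the paper is that matching this d/n rate in deviations, without exponential moment assumptions on the output, requires a more careful estimator (shrinkage plus truncated loss differences) than the OLS baseline sketched here.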
