Sharp Analysis of a Simple Model for Random Forests (1805.02587v7)

Published 7 May 2018 in stat.ML and cs.LG

Abstract: Random forests have become an important tool for improving accuracy in regression and classification problems since their inception by Leo Breiman in 2001. In this paper, we revisit a historically important random forest model originally proposed by Breiman in 2004 and later studied by Gérard Biau in 2012, where a feature is selected at random and the splits occur at the midpoint of the node along the chosen feature. If the regression function is Lipschitz and depends only on a small subset of $ S $ out of $ d $ features, we show that, given access to $ n $ observations and properly tuned split probabilities, the mean-squared prediction error is $ O\left(\left(n(\log n)^{(S-1)/2}\right)^{-\frac{1}{S\log 2+1}}\right) $. This positively answers an outstanding question of Biau about whether the rate of convergence for this random forest model could be improved. Furthermore, by a refined analysis of the approximation and estimation errors for linear models, we show that this rate cannot be improved in general. Finally, we generalize our analysis and improve extant prediction error bounds for another random forest model in which each tree is constructed from subsampled data and the splits are performed at the empirical median along a chosen feature.
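To make the model concrete, below is a minimal sketch (not the authors' code) of the midpoint-splitting forest the abstract describes: at each node a feature is chosen at random according to split probabilities, the node is split at the midpoint of its range along that feature, and the forest averages independently grown trees. The depth, number of trees, uniform split probabilities, and toy data here are illustrative assumptions; the paper's result hinges on tuning the split probabilities, which this sketch does not do.

```python
import numpy as np

def build_centered_tree(X, y, bounds, depth, rng, split_probs):
    """Grow one tree of the midpoint-split ("centered") forest model (sketch).

    bounds      : list of (low, high) intervals, one per feature, for this cell
    depth       : remaining number of splits
    split_probs : probability of choosing each feature at a split
                  (uniform here is only a placeholder; the paper tunes these)
    """
    if depth == 0 or len(y) == 0:
        # Leaf: predict the mean response of the points falling in this cell.
        return {"leaf": True, "value": float(np.mean(y)) if len(y) else 0.0}

    # Pick a feature at random according to the split probabilities.
    j = rng.choice(len(bounds), p=split_probs)
    low, high = bounds[j]
    mid = (low + high) / 2.0  # split at the midpoint of the cell along feature j

    left_mask = X[:, j] <= mid
    left_bounds = [b if k != j else (low, mid) for k, b in enumerate(bounds)]
    right_bounds = [b if k != j else (mid, high) for k, b in enumerate(bounds)]

    return {
        "leaf": False, "feature": j, "threshold": mid,
        "left":  build_centered_tree(X[left_mask],  y[left_mask],  left_bounds,  depth - 1, rng, split_probs),
        "right": build_centered_tree(X[~left_mask], y[~left_mask], right_bounds, depth - 1, rng, split_probs),
    }

def predict_tree(node, x):
    while not node["leaf"]:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["value"]

def predict_forest(trees, x):
    # The forest prediction is the average over independently grown trees.
    return float(np.mean([predict_tree(t, x) for t in trees]))

# Illustrative usage on [0, 1]^d data (placeholder parameters, not tuned as in the paper).
rng = np.random.default_rng(0)
d, n = 5, 2000
X = rng.random((n, d))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(n)  # depends on only 1 of d features
bounds = [(0.0, 1.0)] * d
split_probs = np.full(d, 1.0 / d)  # uniform; the analyzed rate requires weighting relevant features
trees = [build_centered_tree(X, y, bounds, depth=6, rng=rng, split_probs=split_probs)
         for _ in range(50)]
print(predict_forest(trees, np.full(d, 0.3)))
```

The subsampled, empirical-median variant mentioned at the end of the abstract would differ only in the split rule (median of the data in the cell rather than the geometric midpoint) and in growing each tree on a random subsample.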

Citations (21)
