Sharp Analysis of a Simple Model for Random Forests

7 May 2018 · Jason M. Klusowski

Random forests have become an important tool for improving accuracy in regression and classification problems since their introduction by Leo Breiman in 2001. In this paper, we revisit a historically important random forest model, originally proposed by Breiman in 2004 and later studied by Gérard Biau in 2012, in which a feature is selected at random and the split occurs at the midpoint of the node along the chosen feature. If the regression function is Lipschitz and depends only on a small subset of $S$ out of $d$ features, we show that, given access to $n$ observations and properly tuned split probabilities, the mean-squared prediction error is $O((n(\log n)^{(S-1)/2})^{-\frac{1}{S\log 2+1}})$. This answers in the affirmative an outstanding question of Biau about whether the rate of convergence for this random forest model could be improved. Furthermore, by a refined analysis of the approximation and estimation errors for linear models, we show that this rate cannot be improved in general. Finally, we generalize our analysis and improve extant prediction error bounds for another random forest model, in which each tree is constructed from subsampled data and the splits are performed at the empirical median along a chosen feature.
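For concreteness, the following is a minimal Python sketch (not from the paper) of the centered forest described above: each tree repeatedly picks a coordinate at random and splits the current cell at its midpoint along that coordinate, and each leaf predicts the average response of the training points it contains. The toy regression function, the tree depth, the number of trees, and the split probabilities `probs` (weighted toward the $S = 2$ relevant features) are illustrative assumptions, not the tuned values from the paper.

```python
import numpy as np

def grow_centered_tree(X, y, lo, hi, depth, rng, split_probs=None):
    """Recursively grow a centered tree on the cell [lo, hi] (a box in R^d).

    At each internal node a coordinate is drawn at random -- uniformly, or
    from split_probs to mimic the paper's tuned split probabilities -- and
    the cell is split at its midpoint along that coordinate. Leaves store
    the average response of the training points they contain.
    """
    if depth == 0 or len(y) == 0:
        return {"leaf": True, "value": float(y.mean()) if len(y) else 0.0}
    d = X.shape[1]
    j = rng.choice(d, p=split_probs)    # random split coordinate
    mid = 0.5 * (lo[j] + hi[j])         # midpoint of the cell, not of the data
    left = X[:, j] <= mid
    lo_r, hi_l = lo.copy(), hi.copy()
    hi_l[j], lo_r[j] = mid, mid
    return {
        "leaf": False, "j": int(j), "mid": mid,
        "left":  grow_centered_tree(X[left], y[left], lo, hi_l,
                                    depth - 1, rng, split_probs),
        "right": grow_centered_tree(X[~left], y[~left], lo_r, hi,
                                    depth - 1, rng, split_probs),
    }

def predict_tree(node, x):
    while not node["leaf"]:
        node = node["left"] if x[node["j"]] <= node["mid"] else node["right"]
    return node["value"]

def predict_forest(trees, x):
    # The forest prediction averages the individual tree predictions.
    return float(np.mean([predict_tree(t, x) for t in trees]))

# Toy usage: n points on [0, 1]^d, a regression function depending on
# S = 2 of the d = 5 coordinates, and split probabilities concentrated
# on those two features (all values here are illustrative).
rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.random((n, d))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] + 0.1 * rng.standard_normal(n)
probs = np.array([0.45, 0.45, 1/30, 1/30, 1/30])
trees = [grow_centered_tree(X, y, np.zeros(d), np.ones(d), depth=8,
                            rng=rng, split_probs=probs)
         for _ in range(50)]
print(predict_forest(trees, np.array([0.3, 0.7, 0.5, 0.5, 0.5])))
```

To give a sense of scale for the rate stated above: the exponent on $n$ is $\frac{1}{S\log 2+1}$, so for $S = 2$ the error decays like $n^{-1/(2\log 2+1)} \approx n^{-0.42}$, up to the $(\log n)^{(S-1)/2}$ factor.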
