Conjecture: Constancy of covariance-based degrees of freedom with ensemble size due to self-influence weights
Ascertain whether, in bagged ensembles of decision trees, the expected self-influence weight s^i(x_i) of a training point on its own prediction is invariant to the number of trees B. This would imply that the covariance-based degrees of freedom df(\hat{f}) remain constant as B increases, even though the additional randomness-induced smoothing across the other training labels as B grows can still affect prediction performance despite df(\hat{f}) being unchanged.
We conjecture that this could be because the expected dependence of a train-input prediction on its own train-time label is independent of the number of trees used; that is, the expected value of s^i(x_i), which determines df(\hat{f}), is constant across B -- this is exactly what df(\hat{f}) captures. Yet there can be more smoothing across all other training labels as B grows (i.e. more uniform expected s^j(x_i), j \neq i) due to the randomness induced by bootstrapping, which may impact prediction performance.
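Under the covariance definition df(\hat{f}) = (1/\sigma^2) \sum_i Cov(\hat{f}(x_i), y_i), the conjecture can be probed numerically. The sketch below is an illustrative Monte Carlo estimate, not the paper's procedure: the data-generating process, tree depth, ensemble sizes, and replication count are all assumptions chosen for speed, and scikit-learn's `BaggingRegressor` stands in for a generic bagged-tree ensemble.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, sigma, reps = 40, 0.5, 100          # assumed toy sizes, not from the source
X = np.sort(rng.uniform(0, 1, n)).reshape(-1, 1)
f_true = np.sin(2 * np.pi * X[:, 0])   # assumed ground-truth regression function

def estimate_df(B):
    """Monte Carlo estimate of df = (1/sigma^2) * sum_i Cov(f_hat(x_i), y_i),
    averaging over independent label-noise draws for an ensemble of B trees."""
    preds = np.empty((reps, n))
    ys = np.empty((reps, n))
    for r in range(reps):
        y = f_true + sigma * rng.normal(size=n)       # fresh noisy labels
        model = BaggingRegressor(DecisionTreeRegressor(max_depth=3),
                                 n_estimators=B, random_state=r)
        model.fit(X, y)
        preds[r] = model.predict(X)                   # predictions at train inputs
        ys[r] = y
    # empirical covariance between each train prediction and its own label
    cov = ((preds - preds.mean(0)) * (ys - ys.mean(0))).mean(0)
    return cov.sum() / sigma**2

# If the conjecture holds, these estimates stay roughly flat in B.
dfs = {B: estimate_df(B) for B in (1, 5, 25)}
print(dfs)
```

With more replications the estimates stabilize; the interesting comparison is whether df stays near-constant across B even as test error changes.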