
Cause of small-sample gains from short-stacking and pooled stacking in DDML

Establish whether the small-sample performance gains observed for double/debiased machine learning (DDML) with short-stacking or pooled stacking, relative to DDML with conventional stacking, are caused by the imposition of common stacking weights across cross-fitting folds.


Background

The paper proposes pairing DDML with stacking and introduces two variants, short-stacking and pooled stacking, that exploit the cross-fitting structure. Unlike conventional stacking, which estimates a separate set of stacking weights within each cross-fitting fold, both short-stacking and pooled stacking impose a single set of common stacking weights across folds.
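
To make the distinction concrete, here is a minimal sketch of short-stacking for one nuisance function, assuming scikit-learn-style candidate learners, cross-fitted predictions via cross_val_predict, and non-negative stacking weights normalized to sum to one; the synthetic data, the choice of learners, and the constrained-least-squares step are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

# Synthetic data with a deliberately small sample size.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.standard_normal(n)

# Candidate learners for the conditional expectation function.
learners = [
    LassoCV(),
    RandomForestRegressor(n_estimators=200, random_state=0),
    GradientBoostingRegressor(random_state=0),
]
cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Cross-fitted predictions over the full sample: each observation is
# predicted by learners fit on the other folds.
Z = np.column_stack([cross_val_predict(m, X, y, cv=cv) for m in learners])

# Short-stacking: a single constrained least-squares fit on the full
# sample yields one weight vector, common to all cross-fitting folds.
w, _ = nnls(Z, y)
w /= w.sum()  # normalize onto the unit simplex
print("common short-stacking weights:", np.round(w, 3))
```

The NNLS-plus-normalization step is one common way to impose the simplex constraint on stacking weights; other constrained estimators would serve the same illustrative purpose.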

In simulations calibrated to very small samples, short-stacking and pooled stacking outperform conventional stacking. The authors conjecture that the gains arise from imposing common weights across folds, but this mechanism has not been formally established, which motivates a precise theoretical investigation.
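
Continuing the sketch above, a hypothetical illustration of the fold-to-fold variability that common weights remove: conventional stacking re-estimates the weights inside each cross-fitting fold's training sample, and with very few observations those fold-specific weights can fluctuate noticeably around the common short-stacking weights. The inner 3-fold split below is an illustrative choice, not taken from the paper.

```python
# Conventional stacking, per fold: estimate weights from out-of-sample
# predictions generated within each fold's training data only.
for k, (train_idx, _) in enumerate(cv.split(X)):
    Zk = np.column_stack(
        [cross_val_predict(m, X[train_idx], y[train_idx], cv=3)
         for m in learners]
    )
    wk, _ = nnls(Zk, y[train_idx])
    wk /= wk.sum()
    print(f"fold {k} conventional-stacking weights:", np.round(wk, 3))
```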

References

"We conjecture the improvement is due to short and pooled stacking imposing common weights across cross-fitting folds."

Ahrens et al., "Model Averaging and Double Machine Learning," arXiv:2401.01645, 3 Jan 2024, Section 4.2 (DDML and Stacking in Very Small Samples), concluding paragraph.