Verify calibration benefits under alternative Riesz losses

Determine whether, in the calibration step for cross-fitted Riesz representer estimators, replacing the Riesz regression loss with other Bregman–Riesz regression losses (such as those induced by the Kullback–Leibler, negative binomial, or Itakura–Saito divergences) preserves the theoretical benefits of calibration, namely bias reduction and efficiency, when the calibration function is learned in the evaluation sample.

Background

The paper discusses recent work that uses the Riesz regression loss to calibrate Riesz representer estimates, aiming to reduce bias in estimators that rely on cross-fitted Riesz representer estimates. In the described procedure, the data is split into training and evaluation samples; a Riesz representer is learned on the training sample, and an isotonic calibration function is then learned on the evaluation sample using Riesz regression.

The authors note that, in principle, different Riesz losses—such as those arising from their proposed Bregman–Riesz regression framework—could be used in place of the standard Riesz regression loss during calibration. However, they explicitly state that additional verification is needed to ensure that the theoretical benefits associated with calibration are retained under such substitutions.
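For orientation, the alternative losses named in the question are generated by Bregman divergences. A sketch using textbook generators (standard notation, not necessarily the paper's exact parameterization):

\[
D_g(a, b) = g(a) - g(b) - g'(b)\,(a - b),
\]

where the convex generator \(g\) determines the divergence: \(g(t) = t^2/2\) recovers squared error (the standard Riesz regression loss), \(g(t) = t \log t - t\) yields the generalized Kullback–Leibler divergence, and \(g(t) = -\log t\) yields the Itakura–Saito divergence. The open question is whether calibration retains its guarantees when the squared-error generator is swapped for one of these alternatives.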

References

In addition to Riesz regression, recent work has proposed using the Riesz regression loss to calibrate Riesz representer estimates in order to reduce bias in estimators that rely on cross-fitted Riesz representer estimates. For instance, suppose the data sample is split into a Riesz representer training sample and an evaluation sample. The uncalibrated procedure learns \hat{\alpha} in the training sample, then evaluates the estimator in the evaluation sample using \hat{\alpha}. The calibrated procedure still learns \hat{\alpha} in the training sample, but instead evaluates the estimator in the evaluation sample using \hat{\alpha}^* = \hat{f} \circ \hat{\alpha}, where \hat{f}: \mathbb{R} \to \mathbb{R} is an isotonic (monotonic) calibration function learned in the evaluation sample by Riesz regression. In principle, different Riesz losses could be considered for the calibration step, such as those described in the current paper; however, more work is required to verify that the theoretical benefits of calibration are maintained.
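The procedure above evaluates candidate Riesz representer maps by their evaluation-sample Riesz regression loss. A minimal sketch for the average treatment effect (ATE) functional, using the standard squared-error Riesz loss; the function names, the data-generating process, and the crude propensity estimate are illustrative assumptions, not from the paper, and the isotonic fit of \hat{f} itself is omitted:

```python
import numpy as np

# Sketch (assumptions): for the ATE functional m(Z; a) = a(1, X) - a(0, X),
# the Riesz representer is alpha_0(A, X) = A/e(X) - (1 - A)/(1 - e(X)),
# where e is the propensity score. The squared-error Riesz regression loss,
#   L(a) = E[a(A, X)^2] - 2 E[a(1, X) - a(0, X)],
# is minimized by alpha_0 and can be estimated on the evaluation sample to
# compare calibrated and uncalibrated representer maps.

def riesz_loss_ate(alpha_obs, alpha_treated, alpha_control):
    """Empirical Riesz regression loss for the ATE functional."""
    return np.mean(alpha_obs ** 2) - 2.0 * np.mean(alpha_treated - alpha_control)

def alpha_ipw(a, e_hat):
    """Riesz representer values implied by an estimated propensity e_hat."""
    return a / e_hat - (1.0 - a) / (1.0 - e_hat)

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=n)
e_true = 1.0 / (1.0 + np.exp(-X))
A = rng.binomial(1, e_true).astype(float)

# A deliberately rough propensity estimate (hypothetical, for illustration).
e_hat = np.clip(1.0 / (1.0 + np.exp(-0.5 * X)), 0.05, 0.95)

# Evaluation-sample Riesz loss of the uncalibrated representer map.
loss_hat = riesz_loss_ate(
    alpha_ipw(A, e_hat),
    alpha_ipw(np.ones(n), e_hat),
    alpha_ipw(np.zeros(n), e_hat),
)
```

In the calibrated procedure, one would instead fit a monotone \hat{f} minimizing this loss over compositions \hat{f} \circ \hat{\alpha} and report the loss of the composed map; the open question is whether the same comparison remains theoretically justified under non-squared-error Bregman generators.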

Learning density ratios in causal inference using Bregman-Riesz regression (2510.16127 - Hines et al., 17 Oct 2025) in Section 4 (Related work), Calibration paragraph