Verify calibration benefits under alternative Riesz losses
Determine whether replacing the squared-error Riesz regression loss with other Bregman–Riesz regression losses (such as those induced by the Kullback–Leibler, negative binomial, or Itakura–Saito divergences) in the calibration step of cross-fitted Riesz representer estimators preserves the theoretical benefits of calibration, namely bias reduction and efficiency, when the calibration function is learned in the evaluation sample.
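For concreteness, one way to write this loss family, sketched here under the standard Bregman construction rather than taken from any particular reference: let g be a convex generator and let \alpha_0 be the Riesz representer of the linear functional m, so that E[\alpha_0(Z) h(Z)] = E[m(Z; h)] for all h in the relevant class. Then the expected Bregman divergence E[D_g(\alpha_0(Z), \alpha(Z))] equals, up to a term that does not depend on \alpha, the observable loss

    L_g(\alpha) = E[ g'(\alpha(Z)) \alpha(Z) - g(\alpha(Z)) - m(Z; g' \circ \alpha) ],

because the unknown cross term E[g'(\alpha(Z)) \alpha_0(Z)] can be rewritten as E[m(Z; g' \circ \alpha)] by the Riesz property. Taking g(t) = t^2 recovers the squared-error Riesz regression loss E[ \alpha(Z)^2 - 2 m(Z; \alpha) ], while g(t) = t \log t - t gives a Kullback–Leibler version E[ \alpha(Z) - m(Z; \log \circ \alpha) ]; the negative binomial and Itakura–Saito cases follow analogously from their respective generators.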
In addition to Riesz regression, recent work has proposed using the Riesz regression loss to calibrate Riesz representer estimates, in order to reduce bias in estimators that rely on cross-fitted Riesz representer estimates. For instance, suppose the data are split into a Riesz representer training sample and an evaluation sample. The uncalibrated procedure learns \hat{\alpha} in the training sample and then evaluates the estimator in the evaluation sample using \hat{\alpha}. The calibrated procedure still learns \hat{\alpha} in the training sample, but evaluates the estimator in the evaluation sample using \hat{\alpha}^* = \hat{f} \circ \hat{\alpha}, where \hat{f}: \mathbb{R} \to \mathbb{R} is an isotonic (i.e., monotone) calibration function learned in the evaluation sample by Riesz regression. In principle, other Riesz losses, such as those described in the current paper, could be used for the calibration step; however, more work is required to verify that the theoretical benefits of calibration are maintained.
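To make the procedure concrete, below is a minimal numerical sketch for the average treatment effect (ATE) functional m(Z; \alpha) = \alpha(1, X) - \alpha(0, X). Everything in it is illustrative rather than taken from the referenced work: the plug-in inverse-propensity representer, the choice to solve the isotonic Riesz regression as a generic convex program (rather than a specialized isotonic solver), and the bounding of \hat{f} by the range of \hat{\alpha} (which keeps the empirical Riesz loss bounded below over monotone functions) are all assumptions made for this sketch.

```python
# Minimal sketch of isotonic Riesz calibration for the ATE, where
# m(Z; a) = a(1, X) - a(0, X). Illustrative assumptions, not the
# referenced authors' code: alpha_hat is a plug-in inverse-propensity
# estimate from the training fold, and the isotonic fit is a generic
# convex program with monotonicity constraints and bounded values.
import numpy as np
from scipy.optimize import LinearConstraint, minimize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated data: confounder X, binary treatment A, outcome Y (ATE = 1).
n = 400
X = rng.normal(size=(n, 1))
pi = 1.0 / (1.0 + np.exp(-X[:, 0]))
A = rng.binomial(1, pi)
Y = X[:, 0] + A + rng.normal(size=n)

# Sample split: training fold for alpha_hat, evaluation fold for calibration.
idx = rng.permutation(n)
tr, ev = idx[: n // 2], idx[n // 2:]

ps = LogisticRegression().fit(X[tr], A[tr])

def alpha_hat(a, x):
    """Plug-in Riesz representer for the ATE: a/pi(x) - (1-a)/(1-pi(x))."""
    p = ps.predict_proba(x)[:, 1]
    return a / p - (1 - a) / (1 - p)

# Pool the 2*|ev| evaluation points u = alpha_hat(a, X_i), a in {0, 1}.
# The empirical Riesz loss of f o alpha_hat is
#   sum_i f(u_obs_i)^2 - 2 * sum_i [f(u1_i) - f(u0_i)]
#   = sum_j c_j f(u_j)^2 - 2 * sum_j b_j f(u_j)  over the pooled points.
u1, u0 = alpha_hat(1, X[ev]), alpha_hat(0, X[ev])
u = np.concatenate([u1, u0])
b = np.concatenate([np.ones_like(u1), -np.ones_like(u0)])
c = np.concatenate([A[ev] == 1, A[ev] == 0]).astype(float)

order = np.argsort(u)
u_s, b_s, c_s = u[order], b[order], c[order]
m = len(u_s)

def loss(f):
    return np.sum(c_s * f**2) - 2.0 * np.sum(b_s * f)

def grad(f):
    return 2.0 * c_s * f - 2.0 * b_s

def hess(f):
    return np.diag(2.0 * c_s)

# Monotonicity constraints f[j + 1] - f[j] >= 0 at the sorted points;
# bounding f by the range of alpha_hat keeps the loss bounded below.
D = np.eye(m, k=1)[:-1] - np.eye(m)[:-1]
res = minimize(loss, x0=u_s.copy(), jac=grad, hess=hess,
               constraints=[LinearConstraint(D, 0.0, np.inf)],
               bounds=[(u_s[0], u_s[-1])] * m, method="trust-constr")
f_s = res.x

def f_cal(v):
    """Left-continuous step interpolation of the isotonic solution."""
    j = np.clip(np.searchsorted(u_s, v, side="right") - 1, 0, m - 1)
    return f_s[j]

# IPW-style evaluation on the evaluation fold, with and without calibration.
u_obs = np.where(A[ev] == 1, u1, u0)
print("uncalibrated:", np.mean(u_obs * Y[ev]))
print("calibrated:  ", np.mean(f_cal(u_obs) * Y[ev]))
```

Because the squared-error Riesz objective enters only through the loss function above, experimenting with the open question numerically amounts to swapping in another Bregman–Riesz loss at that one point. Note that the Kullback–Leibler and Itakura–Saito generators require a positive representer, so they would apply directly to functionals such as a treated-mean rather than to the signed ATE representer used in this sketch.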