Robustness and mechanism of generative classifiers under realistic spurious correlations

Determine whether class-conditional generative classifiers are more robust to spurious correlations under realistic distribution shifts than discriminative classifiers, and explain the mechanisms underlying any observed robustness advantage.
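To make the distinction concrete: a class-conditional generative classifier models p(x | y) per class and predicts via Bayes' rule, argmax_y log p(x | y) + log p(y), whereas a discriminative classifier models p(y | x) directly. The sketch below is a minimal illustration of the generative decision rule using diagonal-covariance Gaussians on toy data; it is not the paper's method (which uses deep generative models), and all function names here are hypothetical.

```python
import numpy as np

def fit_gaussian_generative(X, y):
    """Fit a diagonal-covariance Gaussian p(x | y) and a prior p(y) per class."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # (per-class mean, per-class variance, class prior)
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(X))
    return params

def predict(params, X):
    """Predict argmax_y log p(x | y) + log p(y) for each row of X."""
    classes = list(params.keys())
    scores = []
    for c in classes:
        mu, var, prior = params[c]
        # Log-density of a diagonal Gaussian, summed over feature dimensions.
        log_lik = -0.5 * np.sum((X - mu) ** 2 / var + np.log(2 * np.pi * var), axis=1)
        scores.append(log_lik + np.log(prior))
    best = np.argmax(np.stack(scores), axis=0)
    return np.array([classes[i] for i in best])

# Toy data: two well-separated 2-D Gaussian blobs.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

params = fit_gaussian_generative(X, y)
preds = predict(params, X)
accuracy = (preds == y).mean()
print(accuracy)
```

The intuition probed by the paper is that, because the generative route must account for all of x under each class rather than just the features most predictive of y, it may be less prone to latching onto spurious shortcut features.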

Background

The paper reviews prior work on using deep generative models for classification and notes that, despite promising signs (e.g., effective robustness and resilience to synthetic corruptions), the field has not reached consensus on whether generative classifiers genuinely offer improved robustness to the spurious correlations that arise in real-world distribution shifts. The authors emphasize that past evidence may be confounded by pretraining on extra data or by architecture-specific effects, such as diffusion models' resilience to input perturbations, leaving the broader question unresolved.

This uncertainty motivates their comprehensive empirical study across multiple datasets and modalities, which aims both to characterize how generative classifiers behave under distribution shift and to explain why they might outperform discriminative approaches. The open problem they raise is thus twofold: establishing general robustness properties and identifying the causal mechanisms behind any observed improvements.

References

Overall, it remains unclear whether generative classifiers are more robust to the spurious correlations seen in realistic distribution shifts, or why they might be better.

Generative Classifiers Avoid Shortcut Solutions  (2512.25034 - Li et al., 31 Dec 2025) in Section 2, Classification with Generative Models (Related Work)