
Quantifying uncertainty in deep learning

Develop reliable and well-calibrated methods to quantify uncertainty in deep learning models so that uncertainty-based active learning strategies can be effective in practice, including for training graph neural networks such as GEARS that predict gene expression responses to perturbations.


Background

In the experimental analysis, the authors evaluate uncertainty-based active learning for training GEARS, a graph neural network that predicts post-perturbation gene expression. They find that an uncertainty-driven selection strategy performs poorly, especially in the early cycles when models are weakly trained and randomly initialized.

They attribute this, in part, to the broader challenge of obtaining trustworthy uncertainty estimates in deep learning, which they note remains a recognized open problem. This motivates the need for principled uncertainty quantification methods that can reliably guide data acquisition decisions in such settings.
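To make the acquisition rule concrete, below is a minimal sketch of the kind of uncertainty-driven selection the background describes: score each candidate perturbation by the disagreement among an ensemble of predictors and query the highest-scoring ones. The function names, array shapes, and the use of ensemble variance are illustrative assumptions, not the paper's or GEARS's implementation; the paper's point is precisely that such scores from weakly trained, randomly initialized deep models can be unreliable in early cycles.

```python
import numpy as np

def ensemble_uncertainty(predictions: np.ndarray) -> np.ndarray:
    """Per-perturbation uncertainty score.

    predictions: shape (n_models, n_candidates, n_genes), each model's
    predicted post-perturbation expression for every candidate.
    Returns the variance across ensemble members, averaged over genes.
    """
    return predictions.var(axis=0).mean(axis=-1)

def select_batch(predictions: np.ndarray, candidate_ids: list[str], batch_size: int) -> list[str]:
    """Pick the candidate perturbations the ensemble disagrees on most."""
    scores = ensemble_uncertainty(predictions)
    ranked = np.argsort(scores)[::-1]  # highest uncertainty first
    return [candidate_ids[i] for i in ranked[:batch_size]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_models, n_candidates, n_genes = 5, 100, 2000
    # Stand-in predictions; in practice these would come from trained models.
    preds = rng.normal(size=(n_models, n_candidates, n_genes))
    candidates = [f"gene_{i}" for i in range(n_candidates)]
    print(select_batch(preds, candidates, batch_size=8))
```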

References

On the one hand, this stems from the well-known open problem of quantifying uncertainty in deep learning.

Data Filtering for Genetic Perturbation Prediction (arXiv:2503.14571, Panagopoulos et al., 18 Mar 2025), Section 4.2 (Results)