
Uncertainty-aware Latent Safety Filters for Avoiding Out-of-Distribution Failures

Published 1 May 2025 in cs.RO, cs.LG, cs.SY, and eess.SY | (2505.00779v1)

Abstract: Recent advances in generative world models have enabled classical safe control methods, such as Hamilton-Jacobi (HJ) reachability, to generalize to complex robotic systems operating directly from high-dimensional sensor observations. However, obtaining comprehensive coverage of all safety-critical scenarios during world model training is extremely challenging. As a result, latent safety filters built on top of these models may miss novel hazards and even fail to prevent known ones, overconfidently misclassifying risky out-of-distribution (OOD) situations as safe. To address this, we introduce an uncertainty-aware latent safety filter that proactively steers robots away from both known and unseen failures. Our key idea is to use the world model's epistemic uncertainty as a proxy for identifying unseen potential hazards. We propose a principled method to detect OOD world model predictions by calibrating an uncertainty threshold via conformal prediction. By performing reachability analysis in an augmented state space-spanning both the latent representation and the epistemic uncertainty-we synthesize a latent safety filter that can reliably safeguard arbitrary policies from both known and unseen safety hazards. In simulation and hardware experiments on vision-based control tasks with a Franka manipulator, we show that our uncertainty-aware safety filter preemptively detects potential unsafe scenarios and reliably proposes safe, in-distribution actions. Video results can be found on the project website at https://cmu-intentlab.github.io/UNISafe

Summary

Uncertainty-aware Latent Safety Filters for Avoiding Out-of-Distribution Failures

The paper presents an approach to safely controlling robotic systems in complex environments by introducing uncertainty-aware latent safety filters built on generative world models. The central challenge is that even sophisticated world models can miss novel hazards because their training data offers limited coverage of safety-critical scenarios, leading latent safety filters to overconfidently misclassify risky out-of-distribution (OOD) situations as safe. The proposed method detects OOD world model predictions by using the model's epistemic uncertainty as a proxy for unseen hazards, and performs reachability analysis in an augmented state space spanning both the latent representation and the epistemic uncertainty.

Key Contributions

  1. Uncertainty-aware Safety Filters: The paper proposes safety filters that use the world model's epistemic uncertainty to identify potential OOD failures. By calibrating an uncertainty threshold with conformal prediction, it obtains a statistically grounded mechanism for steering robots away from both known and unforeseen hazards.

  2. Augmented Reachability Analysis: Traditional reachability methods assume a perfect dynamics model, an assumption that breaks down in OOD scenarios. This paper incorporates epistemic uncertainty into an augmented latent state space, allowing standard Hamilton-Jacobi reachability analysis to synthesize safety filters that account for both known and OOD safety hazards.

  3. Experimental Validation: Through simulation and hardware experiments with a vision-based Franka manipulator, the paper demonstrates that the approach preemptively detects potentially unsafe scenarios and reliably proposes safe, in-distribution actions.
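The conformal-calibration step in contribution 1 can be illustrated with a minimal split-conformal sketch. This is not the paper's implementation; the uncertainty scores, the significance level `alpha`, and the helper names are all illustrative assumptions. The idea is to pick a threshold from held-out in-distribution uncertainty scores so that a new in-distribution prediction exceeds it with probability at most `alpha`:

```python
import numpy as np

def calibrate_threshold(cal_scores, alpha=0.05):
    """Split conformal calibration: choose tau so that a fresh in-distribution
    score falls at or below tau with probability >= 1 - alpha."""
    n = len(cal_scores)
    # Finite-sample-corrected quantile level, capped at 1.0 for small n.
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(np.asarray(cal_scores), q, method="higher")

# Hypothetical ensemble-disagreement scores on held-out in-distribution data.
rng = np.random.default_rng(0)
cal_scores = rng.gamma(shape=2.0, scale=0.1, size=500)
tau = calibrate_threshold(cal_scores, alpha=0.05)

def is_ood(score, tau):
    # Flag a world-model prediction as OOD when its uncertainty exceeds tau.
    return score > tau
```

Any scalar uncertainty measure (ensemble disagreement, predictive variance, etc.) can be plugged in as the score; the conformal quantile only requires that calibration and test scores be exchangeable.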

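The augmented reachability analysis in contribution 2 can be sketched with a discounted safety Bellman backup on a toy discretized state space. Everything here is a simplification for illustration: the latent coordinate, the uncertainty dynamics, the failure margins, and the calibrated threshold `tau` are assumed, not taken from the paper. The key structural point is that the failure margin is defined over the augmented state `(z, u)`, so states whose epistemic uncertainty exceeds the threshold are treated as failures alongside known ones:

```python
import numpy as np

# Toy discretization: latent coordinate z in [-1, 1], uncertainty u in [0, 1].
Z = np.linspace(-1.0, 1.0, 21)
U = np.linspace(0.0, 1.0, 11)
actions = [-0.1, 0.0, 0.1]   # hypothetical latent-space control actions
tau = 0.5                    # calibrated OOD threshold (assumed)
gamma = 0.95                 # discount factor of the safety Bellman backup

def margin(z, u):
    """Augmented failure margin: negative if the latent state is in the known
    failure set OR the epistemic uncertainty exceeds the threshold."""
    l_known = 0.6 - abs(z)   # toy known-failure margin: |z| > 0.6 is unsafe
    l_ood = tau - u          # OOD margin: u > tau is treated as unsafe
    return min(l_known, l_ood)

def step(iz, iu, a):
    """Toy dynamics: the action shifts the latent; uncertainty accumulates
    faster near the edge of the (assumed) training distribution."""
    z = np.clip(Z[iz] + a, -1.0, 1.0)
    u = np.clip(U[iu] + 0.05 * abs(z), 0.0, 1.0)
    return np.abs(Z - z).argmin(), np.abs(U - u).argmin()

# Fixed-point iteration of the discounted safety Bellman backup:
#   V(s) = (1 - gamma) * l(s) + gamma * min(l(s), max_a V(f(s, a)))
V = np.array([[margin(z, u) for u in U] for z in Z])
for _ in range(200):
    V_new = np.empty_like(V)
    for iz, z in enumerate(Z):
        for iu, u in enumerate(U):
            best = max(V[step(iz, iu, a)] for a in actions)
            V_new[iz, iu] = (1 - gamma) * margin(z, u) + gamma * min(margin(z, u), best)
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new
# Safe set ~ {(z, u) : V(z, u) > 0}; a safety filter overrides any proposed
# action whose successor state would leave this set.
```

In the actual method the backup runs over learned latent dynamics rather than a hand-built grid, but the structure is the same: because high-uncertainty augmented states carry a negative margin, the synthesized filter steers away from OOD regions exactly as it steers away from known failures.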
Impact and Implications

This framework contributes to robotics and AI safety by offering a more resilient approach to high-dimensional control tasks driven directly by sensor inputs. The proposed filter combines classical safe control with modern machine learning, handling OOD conditions that challenge existing robot safety paradigms.

Future Directions

The paper suggests that future work could pursue system-level assurances for failure-detection mechanisms built on quantified epistemic uncertainty, further strengthening safety guarantees in complex, real-world robotic applications. Extending the approach to active learning settings through safe exploration strategies could also improve world model generalization and robustness.

Conclusion

This research underscores the need for advanced safety mechanisms in adaptive robotics, establishing epistemic uncertainty as a critical signal for safeguarding against unseen hazards. The presented framework and experiments demonstrate the ability to act preemptively against both known and unexpected dangers, strengthening the operational safety of robotic systems in dynamic, unpredictable environments.
