GLIP-OOD: Zero-Shot Graph OOD Detection with Graph Foundation Model (2504.21186v2)
Abstract: Out-of-distribution (OOD) detection is critical for ensuring the safety and reliability of machine learning systems, particularly in dynamic and open-world environments. In the vision and text domains, zero-shot OOD detection - which requires no training on in-distribution (ID) data - has advanced significantly through the use of large-scale pretrained models, such as vision-language models (VLMs) and large language models (LLMs). However, zero-shot OOD detection in graph-structured data remains largely unexplored, primarily due to the challenges posed by complex relational structures and the absence of powerful, large-scale pretrained models for graphs. In this work, we take the first step toward enabling zero-shot graph OOD detection by leveraging a graph foundation model (GFM). Our experiments show that, when provided only with class label names for both ID and OOD categories, the GFM can effectively perform OOD detection - often surpassing existing supervised OOD detection methods that rely on extensive labeled node data. We further address the practical scenario in which OOD label names are unavailable by introducing GLIP-OOD, a framework that uses LLMs to generate semantically informative pseudo-OOD labels from unlabeled data. These generated OOD labels allow the GFM to better separate ID and OOD classes, facilitating more precise OOD detection - all without any labeled nodes (only ID label names). To our knowledge, this is the first approach to achieve node-level graph OOD detection in a fully zero-shot setting, and it attains performance comparable to state-of-the-art supervised methods on four benchmark text-attributed graph datasets.
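The abstract describes scoring nodes against the label names of ID classes together with LLM-generated pseudo-OOD labels. The sketch below illustrates one plausible way such zero-shot scoring could work; it is not the authors' implementation. The label names, the encoder functions, and the thresholding are all placeholder assumptions standing in for the GFM's actual node and text encoders.

```python
# Minimal sketch (not the paper's code): zero-shot OOD scoring by comparing
# node embeddings against text embeddings of ID label names plus
# LLM-generated pseudo-OOD label names. Random vectors stand in for the
# graph foundation model's node/text encoders.
import numpy as np

rng = np.random.default_rng(0)

id_labels = ["computer science", "physics", "mathematics"]    # hypothetical ID class names
pseudo_ood_labels = ["cooking recipes", "sports commentary"]  # hypothetical LLM-generated OOD names

def encode_text(labels, dim=64):
    """Placeholder for the GFM's text encoder (hypothetical)."""
    return rng.normal(size=(len(labels), dim))

def encode_nodes(num_nodes, dim=64):
    """Placeholder for the GFM's node encoder (hypothetical)."""
    return rng.normal(size=(num_nodes, dim))

def cosine(a, b):
    """Pairwise cosine similarity between two sets of vectors."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

label_emb = encode_text(id_labels + pseudo_ood_labels)
node_emb = encode_nodes(num_nodes=10)

# Softmax over all (ID + pseudo-OOD) labels; a node's OOD score is the
# total probability mass assigned to the pseudo-OOD labels.
logits = cosine(node_emb, label_emb)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
ood_score = probs[:, len(id_labels):].sum(axis=1)

is_ood = ood_score > 0.5  # threshold chosen purely for illustration
print(ood_score.round(3))
```

The pseudo-OOD labels serve only as contrastive anchors during scoring: nodes whose embeddings sit closer to those labels than to any ID label name receive high OOD scores, which is how the abstract's claim of "no labeled nodes, only ID label names" can be realized.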