Granule Density Outlier Factor (GDOF)
- GDOF is a density-based outlier detection framework that uses fuzzy granulation and multiscale analysis to identify anomalies in both homogeneous and mixed datasets.
- It combines attribute-level fuzzy similarity with density estimates to compute an interpretable outlier score sensitive to local and global sparsity.
- The method supports unsupervised and semi-supervised regimes and achieves state-of-the-art performance on diverse benchmark datasets.
The Granule Density-based Outlier Factor (GDOF) is a flexible and theoretically grounded framework for outlier detection that integrates fuzzy set-based granulation, density estimation, and multiscale ensemble strategies. GDOF systematically combines attribute-level fuzzy granules to identify samples in locally or globally sparse regions of the data, thereby flagging them as potential outliers. The method supports both unsupervised and semi-supervised regimes, natively handles heterogeneous and mixed-type attributes, and achieves state-of-the-art accuracy across a variety of domains (Gao et al., 6 Jan 2025, Chen et al., 21 Dec 2025).
1. Mathematical Foundations of GDOF
Consider an information system $(U, A)$ with samples $U = \{x_1, \dots, x_n\}$ and attribute set $A$. GDOF builds on fuzzy rough set theory, representing each sample by a vector of fuzzy similarities to all other samples and estimating its density relative to the remainder of the data.
Fuzzy Similarity:
Given attribute $a \in A$ and normalized values $a(x_i), a(x_j) \in [0,1]$, the fuzzy similarity $R_a(x_i, x_j)$ is typically defined as
- For numerical $a$: $R_a(x_i, x_j) = 1 - |a(x_i) - a(x_j)|$ (if $|a(x_i) - a(x_j)| \le \delta$), otherwise $0$, where $\delta \in (0,1]$ is the fuzzy radius.
- For categorical $a$: $R_a(x_i, x_j) = 1$ if $a(x_i) = a(x_j)$, $0$ otherwise.
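A minimal Python sketch of these two kernels under the definitions above (the function names and the value $\delta = 0.3$ are illustrative, not taken from the papers):

```python
# Illustrative similarity kernels; DELTA = 0.3 is an assumed fuzzy radius.
DELTA = 0.3

def fuzzy_sim_numeric(u, v, delta=DELTA):
    """Fuzzy similarity of two normalized numerical values in [0, 1]."""
    d = abs(u - v)
    return 1.0 - d if d <= delta else 0.0

def fuzzy_sim_categorical(u, v):
    """Crisp match/mismatch similarity for categorical values."""
    return 1.0 if u == v else 0.0

print(fuzzy_sim_numeric(0.10, 0.25))    # 0.85
print(fuzzy_sim_numeric(0.10, 0.50))    # 0.0 (outside the delta window)
print(fuzzy_sim_categorical("a", "a"))  # 1.0
```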
Fuzzy Granule and Density:
The fuzzy granule of $x_i$ under $a$ is the $n$-vector $[x_i]_a = (R_a(x_i, x_1), \dots, R_a(x_i, x_n))$, with granule cardinality $|[x_i]_a| = \sum_{j=1}^{n} R_a(x_i, x_j)$ and normalized density $\rho_a(x_i) = |[x_i]_a| / n$.
Relative Density Adjustment:
For local density adaptation, define a relative density ratio, e.g. $\gamma_a(x_i, x_j) = \min\big(\rho_a(x_j)/\rho_a(x_i),\, 1\big)^{\lambda}$ with contrast exponent $\lambda \ge 0$. The adjusted similarity is $\tilde{R}_a(x_i, x_j) = R_a(x_i, x_j)\, \gamma_a(x_i, x_j)$.
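A short sketch of granule construction, density, and the relative-density modulation; the $\min(\rho_a(x_j)/\rho_a(x_i), 1)^{\lambda}$ form follows the reconstruction above, and the helper names are hypothetical:

```python
import numpy as np

def similarity_matrix(values, delta):
    """Pairwise fuzzy similarity R_a for one normalized numerical attribute."""
    d = np.abs(values[:, None] - values[None, :])
    return np.where(d <= delta, 1.0 - d, 0.0)

def granule_density(R):
    """Normalized granule density: row sums of R divided by n."""
    return R.sum(axis=1) / R.shape[1]

def adjusted_similarity(R, lam=1.0):
    """Relative-density modulation: entry (i, j) is damped when the
    neighbor x_j is sparser than x_i (assumed min-ratio form)."""
    rho = granule_density(R)
    ratio = np.minimum(rho[None, :] / rho[:, None], 1.0) ** lam
    return R * ratio

x = np.array([0.0, 0.1, 0.2, 0.3, 0.5])
R = similarity_matrix(x, delta=0.3)
print(granule_density(R))         # the semi-isolated 0.5 has the lowest density
print(adjusted_similarity(R)[3])  # x4's link to the sparser 0.5 is damped
```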
Attribute Set Conjunction and Significance:
For $B \subseteq A$, define the combined similarity via the conjunction $\tilde{R}_B(x_i, x_j) = \min_{a \in B} \tilde{R}_a(x_i, x_j)$, with cardinalities $|[x_i]_B| = \sum_{j} \tilde{R}_B(x_i, x_j)$ and densities $\rho_B(x_i) = |[x_i]_B| / n$. The granulation significance of an attribute $a$ measures how much adding $a$ to $B$ changes the granule densities, e.g. $\mathrm{Sig}(a, B) = \tfrac{1}{n} \sum_{i} \big(\rho_B(x_i) - \rho_{B \cup \{a\}}(x_i)\big)$.
GDOF Outlier Score:
Given a chain of attribute subsets $B_1 \subset B_2 \subset \dots \subset B_m = A$ sorted by descending significance, define the Granule Density-based Outlier Factor ("GDOF"): $\mathrm{GDOF}(x_i) = 1 - \tfrac{1}{m} \sum_{k=1}^{m} \rho_{B_k}(x_i)$. A higher $\mathrm{GDOF}(x_i)$ indicates a higher likelihood of $x_i$ being an outlier (Gao et al., 6 Jan 2025).
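The score itself reduces to a few lines, assuming the per-attribute similarity matrices are supplied in descending order of significance (so list prefixes realize the chain $B_1 \subset \dots \subset B_m$); this sketch omits the density adjustment for brevity:

```python
import numpy as np

def conjunction(R_list):
    """Pointwise-minimum conjunction of per-attribute similarity matrices."""
    return np.minimum.reduce(R_list)

def gdof(R_list):
    """GDOF over the chain induced by the list order: one minus the
    mean granule density along the chain of attribute subsets."""
    n = R_list[0].shape[0]
    densities = [conjunction(R_list[:m]).sum(axis=1) / n
                 for m in range(1, len(R_list) + 1)]
    return 1.0 - np.mean(densities, axis=0)

# Two attributes; the last sample is isolated on both.
X = np.array([[0.0, 1.0], [0.1, 1.0], [0.2, 0.9], [0.3, 1.0], [1.0, 0.0]])
delta = 0.3
R_list = []
for j in range(X.shape[1]):
    d = np.abs(X[:, j][:, None] - X[:, j][None, :])
    R_list.append(np.where(d <= delta, 1.0 - d, 0.0))
print(gdof(R_list))  # highest score for the isolated fifth sample
```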
2. Algorithmic Workflow and Multiscale Integration
GDOF’s core is extensible, supporting multiscale, ensemble-based outlier detection via granular-ball decomposition and view fusion. The generalized algorithm involves:
- Multi-Scale View Generation:
- Start from the finest partition (each point is a granular-ball).
- Iteratively merge balls based on fuzzy similarity until a single ball remains.
- Each partition at a granularity level $k$ forms a scale (view), denoted $\mathrm{GB}_k$.
- Within-Scale Scoring:
- For each granular-ball, treat it as a super-sample; compute GDOF scores $S_k(x)$ for all constituent samples.
- Scores are mapped to probabilities via a two-sided linear transform.
- Ensemble Fusion and Thresholding:
- Fuse view-specific probabilities: $P(x) = \sum_k \nu_k P_k(x) \big/ \sum_k \nu_k$, where the view weights are $\nu_k = 1 - \tfrac{1}{n} \sum_{x} H(P_k(x))$ with $H$ the binary entropy.
- Three-way decision partition:
- $\mathrm{POS} = \{x : P(x) \ge \alpha\}$, $\mathrm{NEG} = \{x : P(x) \le \beta\}$
- $\mathrm{BND} =$ remainder
- SVM-Based Refinement:
- Train a weighted SVM on POS (outlier) and NEG (inlier) with sample weights $\mu(x)$ derived from the fused probabilities $P(x)$.
- Platt-scale SVM outputs for BND to deliver final outlier probabilities (Gao et al., 6 Jan 2025).
Pseudocode encapsulating this workflow:
Input: U={x₁…xₙ}, A, λ, δ, t
1. GBSV = generateGranularBalls(U, A, δ)
2. For each scale k:
Compute Sₖ(x) for all x using density-enhanced granules
Map Sₖ to Pₖ(x)
νₖ = 1 – (1/n) ∑ₓ H(Pₖ(x))
3. Fuse: P(x) = ∑ₖ νₖPₖ(x)/∑ₖνₖ
Compute thresholds α, β; assign POS, NEG, BND
4. Train SVM on POS(+1), NEG(–1), sample weights μ(x)
5. For x in BND, Platt-scale SVM outputs → P̂(x)
Output: P̂(x)
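A hedged Python rendering of the fusion and thresholding steps (steps 2–3 of the pseudocode); the thresholds $\alpha = 0.7$, $\beta = 0.3$ are placeholder values rather than the papers' derived ones:

```python
import numpy as np

def binary_entropy(p, eps=1e-12):
    """Elementwise binary entropy H(p) in bits."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def fuse_views(P_views):
    """Entropy-weighted fusion: confident (low-entropy) views weigh more."""
    weights = np.array([1.0 - binary_entropy(P).mean() for P in P_views])
    return sum(w * P for w, P in zip(weights, P_views)) / weights.sum()

def three_way_split(P, alpha=0.7, beta=0.3):
    """POS/NEG/BND index sets from fused probabilities."""
    pos = np.where(P >= alpha)[0]
    neg = np.where(P <= beta)[0]
    bnd = np.where((P > beta) & (P < alpha))[0]
    return pos, neg, bnd

P1 = np.array([0.9, 0.1, 0.2, 0.8])
P2 = np.array([0.8, 0.2, 0.1, 0.9])
pos, neg, bnd = three_way_split(fuse_views([P1, P2]))
print(pos, neg, bnd)  # samples 0 and 3 in POS, 1 and 2 in NEG
```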
A more attribute-centric, label-informed GDOF is developed in (Chen et al., 21 Dec 2025); it optimizes per-attribute fuzzy radii to discriminate between (few) labeled outliers and (sampled) inliers, and forms the outlier score as a weighted sum of the attributes' granule densities.
3. Computational Complexity and Parameterization
The time and space complexity are governed by pairwise operations and the number of attributes:
- Single-view FRS+GDOF: $O(n^2 |A|)$ (mainly from similarity and granule construction)
- Multi-scale granular-ball generation: $O(n \log n)$ per scale; the number of scales $t$ is empirically $O(\log n)$
- SVM refinement: $O(n^2)$ to $O(n^3)$ (for SMO-like solvers)
- Overall: $O(n^2 |A|)$ time, $O(n^2)$ memory
Parameterization:
- $\delta$ regulates the neighborhood window
- $\lambda$ tunes the impact of local density contrast in similarity modulation
- Thresholds $\alpha$, $\beta$ and the margin parameter define the three-way division
- For label-informed GDOF, per-attribute radii $\delta_a$ are optimized to enhance density separation between outliers and inliers (Gao et al., 6 Jan 2025, Chen et al., 21 Dec 2025)
In practice, sparsity in the similarity matrices and a small number of scales $t$ offer further computational savings.
4. Illustrative Example
Consider five 1D samples normalized to $[0,1]$: $x_1 = 0,\ x_2 = 0.1,\ x_3 = 0.2,\ x_4 = 0.3,\ x_5 = 1.0$. Choose $\delta = 0.3$ ($\lambda$ is tuned accordingly and omitted here):
- Compute $R_a(x_i, x_j) = 1 - |x_i - x_j|$ whenever $|x_i - x_j| \le 0.3$, else $0$:
Fill the similarity matrix, e.g., $R_a(x_1, x_2) = 0.9$, $R_a(x_1, x_4) = 0.7$, $R_a(x_1, x_5) = 0$.
- Granule cardinality:
$|[x_1]_a| = 1 + 0.9 + 0.8 + 0.7 + 0 = 3.4$, then $\rho_a(x_1) = 3.4/5 = 0.68$.
- Relative density:
E.g., $\rho_a(x_5) = 1/5 = 0.2$, so $\rho_a(x_5)/\rho_a(x_1) \approx 0.29$ and the similarities of $x_5$ to the cluster are damped further.
- Adjusted similarity and GDOF score:
With a single attribute the significance chain is trivial; the final score $1 - \rho_a(x_i)$ evaluates to $0.32, 0.28, 0.28, 0.32$ for the clustered points and $0.8$ for $x_5$, correctly flagging the isolated sample.
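The arithmetic can be checked in a few lines:

```python
import numpy as np

x = np.array([0.0, 0.1, 0.2, 0.3, 1.0])
delta = 0.3

d = np.abs(x[:, None] - x[None, :])
R = np.where(d <= delta, 1.0 - d, 0.0)  # fuzzy similarity matrix
card = R.sum(axis=1)                    # granule cardinalities
rho = card / len(x)                     # normalized granule densities
score = 1.0 - rho                       # single-attribute GDOF score

print(card)   # [3.4 3.6 3.6 3.4 1. ]
print(rho)    # [0.68 0.72 0.72 0.68 0.2 ]
print(score)  # x5 scores 0.8, well above the cluster's <= 0.32
```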
5. Theoretical Properties and Interpretability
GDOF provides principled density estimates both globally and locally:
- In dense clusters, $R_a(x_i, x_j)$ is near $1$ for all pairs with $|a(x_i) - a(x_j)| \le \delta$, so $\rho_a(x_i)$ approaches $1$ (Prop. 1, (Chen et al., 21 Dec 2025)).
- Adding a distant point to a neighborhood reduces $\rho_a$ for points in the cluster (Prop. 2), so GDOF is sensitive to sparsity increases; a numeric check appears at the end of this section.
GDOF is thus well-aligned with classical notions of density-based outliers, with the added advantage of attribute-wise decomposition and the ability to natively process heterogeneity and fuzziness.
A plausible implication is that GDOF enables interpretability: attributes with high discriminatory power for outliers are explicitly weighted, and outlier scores are directly connected to fuzzy local densities. Multiscale and ensemble aspects further mitigate sensitivity to scale and cluster structure.
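A quick numeric check of Prop. 2 under the reconstructed similarity, showing that appending a distant point lowers every cluster member's normalized density:

```python
import numpy as np

def density(values, delta=0.3):
    """Normalized granule densities for a 1D attribute."""
    d = np.abs(values[:, None] - values[None, :])
    R = np.where(d <= delta, 1.0 - d, 0.0)
    return R.sum(axis=1) / len(values)

cluster = np.array([0.0, 0.1, 0.2, 0.3])
print(density(cluster))                  # [0.85 0.9  0.9  0.85]
print(density(np.append(cluster, 1.0)))  # cluster densities drop to <= 0.72
```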
6. Empirical Performance
On 20 benchmark datasets from UCI, MVTec-AD, and OD-bench, GDOF and its multiscale variants consistently achieve state-of-the-art performance:
- Unsupervised/multiscale GDOF (Gao et al., 6 Jan 2025):
- Achieves the highest average AUROC among the compared detectors
- Outperforms single-view FRS, LOF, kNN, CBLOF, and isolation forest baselines
- Minimum 8.5% AUROC gain over the best non-ensemble baseline (significant under Friedman+Nemenyi tests)
- Label-informed GDOF (Chen et al., 21 Dec 2025):
- Mean AUC exceeds that of the best competitor
- In mixed/categorical datasets: +10–15% AUC versus the state of the art
- AP rises from $0.542$ (next best) to $0.594$
- Performance is robust as the number of pseudo-inliers varies (50–500); label efficiency is high, with gains saturating after 5–30 labeled outliers
These results confirm GDOF’s strong empirical validity for both classical and challenging mixed-type datasets.
7. Extensions: Heterogeneous and Label-Informed GDOF
GDOF extends to heterogeneous data by granularizing each attribute according to its type and optimizing each attribute's fuzzy radius $\delta_a$ for the best separation between a small set of labeled outliers and putative or given inliers. The final outlier score is a weighted sum of attribute densities, where each attribute's weight is tied to its relevance (the difference between the average density of inliers and that of outliers) (Chen et al., 21 Dec 2025).
A plausible implication is that GDOF adapts gracefully to domains with mixed numerical, ordinal, and categorical data, and leverages modest amounts of labeled anomaly data to prioritize informative features. Negative sampling strategies allow usage in settings where inlier labels are scarce or absent.
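A sketch of this weighted scoring under the stated weighting rule (inlier-minus-outlier mean density gap); the helper names and the fixed radii stand in for the paper's per-attribute optimization:

```python
import numpy as np

def attribute_density(values, delta):
    """Granule density of one normalized numerical attribute."""
    d = np.abs(values[:, None] - values[None, :])
    R = np.where(d <= delta, 1.0 - d, 0.0)
    return R.sum(axis=1) / len(values)

def label_informed_score(X, outlier_idx, inlier_idx, deltas):
    """Weighted sum of per-attribute granule densities; the weight of an
    attribute is its inlier-minus-outlier mean density gap, so attributes
    that separate the two groups dominate the final score."""
    n, m = X.shape
    rho = np.column_stack([attribute_density(X[:, j], deltas[j]) for j in range(m)])
    w = rho[inlier_idx].mean(axis=0) - rho[outlier_idx].mean(axis=0)
    w = np.maximum(w, 0.0)
    w = w / w.sum() if w.sum() > 0 else np.full(m, 1.0 / m)
    return 1.0 - rho @ w  # low weighted density => high outlier score

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.0, 0.3, 50),   # informative attribute
                     rng.uniform(0.0, 1.0, 50)])  # uninformative attribute
X[:2, 0] = [0.9, 0.95]                            # two labeled outliers
scores = label_informed_score(X, outlier_idx=[0, 1],
                              inlier_idx=list(range(10, 30)),
                              deltas=[0.2, 0.2])
print(scores[:2].mean() > scores[2:].mean())      # True: outliers score higher
```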
References:
- "Fuzzy Granule Density-Based Outlier Detection with Multi-Scale Granular Balls" (Gao et al., 6 Jan 2025)
- "Label-Informed Outlier Detection Based on Granule Density" (Chen et al., 21 Dec 2025)