Evidential Inter-Intra Fusion (EIF)
- Evidential Inter-Intra Fusion (EIF) is a framework that integrates heterogeneous evidence using Dempster-Shafer (DS) belief functions together with Dirichlet and Normal-Inverse-Gamma (NIG) distributions to explicitly quantify uncertainty in both intra- and inter-fusion stages.
- It combines multi-scale, multi-branch information across various domains such as occupancy grids, stereo matching, gaze regression, fake news detection, and intrusion detection using principled fusion rules.
- EIF improves robustness and explainability by employing uncertainty-aware loss functions and evidence fusion techniques, resulting in enhanced performance across challenging real-world applications.
Evidential Inter-Intra Fusion (EIF) is a principled framework for integrating heterogeneous sources of evidence or multi-level models under explicit uncertainty quantification, built on formalisms such as Dempster-Shafer (DS) theory and the Dirichlet and Normal-Inverse-Gamma (NIG) distributions. EIF orchestrates fusion both within a source/model (intra-fusion, e.g., multi-scale, multi-branch, or multi-local regressors) and across sources/models (inter-fusion, e.g., multiple datasets, views, sensors, or modalities) with principled evidence combination rules. EIF architectures have emerged in occupancy grid fusion for cooperative autonomous vehicles, stereo matching, cross-dataset regression, explainable fake-news detection, and multi-sensor intrusion detection (Kempen et al., 2023, Lou et al., 2023, Wang et al., 2024, Dong et al., 2024, Sahu et al., 2021).
1. Theoretical Foundations
EIF formalizes sources of information as belief assignments (DS masses, Dirichlet or NIG parameters) expressing uncertainty about latent states or regression targets.
In occupancy grid mapping, each grid cell carries a mass function over the frame $\Theta = \{\mathrm{free}, \mathrm{occupied}\}$, subject to $m(\{\mathrm{free}\}) + m(\{\mathrm{occupied}\}) + m(\Theta) = 1$, with $m(\Theta)$ as the uncertainty mass (Kempen et al., 2023). DS belief ($\mathrm{Bel}$), plausibility ($\mathrm{Pl}$), and pignistic probability ($\mathrm{BetP}$) are derived via standard transforms.
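The following minimal Python sketch illustrates these quantities for a single cell on the binary frame; the class and method names are illustrative and not taken from (Kempen et al., 2023).

```python
# Minimal sketch of a two-class DS mass function as used in evidential
# occupancy mapping; the class and method names are illustrative only.
from dataclasses import dataclass


@dataclass
class CellMass:
    m_free: float     # mass on {free}
    m_occ: float      # mass on {occupied}
    m_unknown: float  # mass on the full frame {free, occupied} (uncertainty)

    def __post_init__(self):
        assert abs(self.m_free + self.m_occ + self.m_unknown - 1.0) < 1e-6

    def belief_occ(self) -> float:
        # Bel({occupied}): mass committed exactly to {occupied}
        return self.m_occ

    def plausibility_occ(self) -> float:
        # Pl({occupied}): mass not contradicting {occupied}
        return self.m_occ + self.m_unknown

    def pignistic_occ(self) -> float:
        # BetP({occupied}): uncertainty mass split evenly over the singletons
        return self.m_occ + 0.5 * self.m_unknown


cell = CellMass(m_free=0.2, m_occ=0.5, m_unknown=0.3)
print(cell.belief_occ(), cell.plausibility_occ(), cell.pignistic_occ())
```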
Evidential regression employs the normal-inverse-gamma (NIG) distribution as a conjugate prior over the target mean and variance (Lou et al., 2023, Wang et al., 2024). The posterior NIG parameters $(\gamma, \nu, \alpha, \beta)$ encode both aleatoric ($\mathbb{E}[\sigma^2] = \beta/(\alpha - 1)$) and epistemic ($\mathrm{Var}[\mu] = \beta/(\nu(\alpha - 1))$) uncertainties.
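Under these standard identities, a point prediction and its two uncertainty terms can be read off the NIG parameters directly; the helper below is a hedged sketch with illustrative names.

```python
# Hedged sketch: reading a point prediction and uncertainty estimates off the
# NIG parameters (gamma, nu, alpha, beta); names are illustrative.
def nig_summary(gamma: float, nu: float, alpha: float, beta: float):
    assert nu > 0 and alpha > 1 and beta > 0
    prediction = gamma                       # E[mu]
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2]
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]
    return prediction, aleatoric, epistemic


print(nig_summary(gamma=0.1, nu=2.0, alpha=3.0, beta=0.5))
```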
For classifier-based fusion (intrusion detection), probabilities from ML classifiers are mapped to DS mass functions, and intra/inter fusion is performed using DS rules (a minimal sketch of the normalized conjunctive rule follows this list), e.g.:
- Dempster’s normalized conjunctive rule
- Disjunctive rule (for insufficient trust)
- Cautious rule (least-committed combination) (Sahu et al., 2021)
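On the two-class frame used above, Dempster's normalized conjunctive rule reduces to a few products and a renormalization by the non-conflicting mass; the sketch below assumes masses expressed as (m_free, m_occ, m_unknown) tuples and is not drawn from any of the cited implementations.

```python
# Sketch of Dempster's normalized conjunctive rule on the two-class frame
# {free, occupied}; masses are (m_free, m_occ, m_unknown) tuples summing to 1.
# Not taken from any of the cited implementations.
def dempster_combine(m1, m2):
    f1, o1, u1 = m1
    f2, o2, u2 = m2
    conflict = f1 * o2 + o1 * f2              # mass assigned to the empty set
    norm = 1.0 - conflict
    if norm <= 0.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    f = (f1 * f2 + f1 * u2 + u1 * f2) / norm  # fused mass on {free}
    o = (o1 * o2 + o1 * u2 + u1 * o2) / norm  # fused mass on {occupied}
    u = (u1 * u2) / norm                      # fused uncertainty mass
    return f, o, u


print(dempster_combine((0.6, 0.1, 0.3), (0.2, 0.5, 0.3)))
```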
2. Intra-Fusion Mechanisms
Intra-fusion synthesizes multiple sources within a given context (scale, location, data partition).
- Stereo matching: ELFNet predicts evidential (NIG) distributions at three scales of the cost volume. Intra-fusion is performed with the MoNIG rule, under which both means and uncertainties are evidence-weighted averages of the per-scale NIG parameters (Lou et al., 2023); see the sketch after this list.
- Cross-dataset regression: Each branch is partitioned into overlapping label subspaces, with local regressors trained on subsets. Intra-fusion (MoNIG) fuses local NIG heads per dataset (Wang et al., 2024). Local experts specialize to gaze intervals; overlap coefficients ensure robustness.
- Fake news: Divergence selection identifies the top-$k$ most conflicting articles within the relevant news set; intra-fused features represent maximally divergent evidence (Dong et al., 2024).
- Intrusion detection: Intra-domain fusion merges evidence across locations for the same physical/cyber domain using DS rules (Sahu et al., 2021).
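A hedged sketch of MoNIG-style intra-fusion is given below, using one published form of the NIG summation operator; the exact operator in ELFNet and the cross-dataset gaze model may differ in detail, and all names are illustrative.

```python
# Hedged sketch of MoNIG-style intra-fusion via an NIG summation operator;
# each head is a dict of NIG parameters and all names are illustrative.
def nig_sum(p, q):
    nu = p["nu"] + q["nu"]
    gamma = (p["nu"] * p["gamma"] + q["nu"] * q["gamma"]) / nu  # evidence-weighted mean
    alpha = p["alpha"] + q["alpha"] + 0.5
    beta = (p["beta"] + q["beta"]
            + 0.5 * p["nu"] * (p["gamma"] - gamma) ** 2
            + 0.5 * q["nu"] * (q["gamma"] - gamma) ** 2)
    return {"gamma": gamma, "nu": nu, "alpha": alpha, "beta": beta}


def monig_fuse(heads):
    # Fold the pairwise summation over all per-scale / per-branch heads.
    fused = heads[0]
    for head in heads[1:]:
        fused = nig_sum(fused, head)
    return fused


heads = [
    {"gamma": 0.10, "nu": 2.0, "alpha": 3.0, "beta": 0.5},
    {"gamma": 0.15, "nu": 1.0, "alpha": 2.5, "beta": 0.4},
    {"gamma": 0.08, "nu": 3.0, "alpha": 4.0, "beta": 0.6},
]
print(monig_fuse(heads))
```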
3. Inter-Fusion Strategies
Inter-fusion aggregates evidence across sources, modalities, datasets, or model branches.
- Occupancy grids: Two AV OGMs, after pose normalization, are fused cell-wise using Dempster's rule, $m_{1 \oplus 2}(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C)$ with conflict mass $K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)$ (Kempen et al., 2023); a vectorized cell-wise sketch follows this list. Deep CNNs jointly solve for both registration and fusion.
- Stereo/disparity: ELFNet fuses local (cost-volume) and global (transformer, STTR) NIG branches via MoNIG, yielding unified evidential predictions (Lou et al., 2023).
- Cross-dataset regression: All single-dataset branches and the cross-dataset branch outputs are inter-fused via MoNIG to synthesize cross-domain estimates (Wang et al., 2024).
- Fake news: EMIF concatenates inter-source (co-attention of comments/news) and intra-source (divergent relevant news selection) features, penalizing inconsistency with KL-divergence, before final prediction (Dong et al., 2024).
- Intrusion detection: Across physical and cyber domains, mass functions from both are fused by DS rule, followed by aggregation across sensor locations (Sahu et al., 2021).
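As a point of reference for the occupancy-grid case, the sketch below applies Dempster's rule cell-wise to two aligned grids stored as H x W x 3 arrays; this mirrors a rule-based DS fusion baseline rather than the learned one-pass CNN of (Kempen et al., 2023).

```python
# Sketch of cell-wise inter-fusion of two aligned occupancy grid maps with
# Dempster's rule; grids are H x W x 3 arrays with channels
# (m_free, m_occ, m_unknown). This mirrors a rule-based DS baseline, not the
# learned one-pass registration/fusion CNN.
import numpy as np


def fuse_grids(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    f1, o1, u1 = g1[..., 0], g1[..., 1], g1[..., 2]
    f2, o2, u2 = g2[..., 0], g2[..., 1], g2[..., 2]
    conflict = f1 * o2 + o1 * f2
    norm = np.clip(1.0 - conflict, 1e-8, None)  # guard against total conflict
    return np.stack([
        (f1 * f2 + f1 * u2 + u1 * f2) / norm,   # fused {free}
        (o1 * o2 + o1 * u2 + u1 * o2) / norm,   # fused {occupied}
        (u1 * u2) / norm,                       # fused uncertainty
    ], axis=-1)


g1 = np.tile([0.6, 0.1, 0.3], (4, 4, 1))
g2 = np.tile([0.2, 0.5, 0.3], (4, 4, 1))
print(fuse_grids(g1, g2)[0, 0])
```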
4. Loss Functions and Training Protocols
EIF frameworks deploy uncertainty-aware losses to calibrate model confidence.
- Occupancy grid fusion: A per-cell evidential loss is applied, weighted by an occupation factor to counter class imbalance (Kempen et al., 2023).
- Evidential regression (NIG): The training loss combines the negative log-likelihood of the model evidence with an evidence regularizer, $\mathcal{L} = \mathcal{L}^{\mathrm{NLL}} + \lambda\, \mathcal{L}^{\mathrm{R}}$, where $\mathcal{L}^{\mathrm{R}} = |y - \gamma|\,(2\nu + \alpha)$ penalizes evidence placed on erroneous predictions (Lou et al., 2023, Wang et al., 2024); a sketch of the full loss follows this list.
- EMIF: A KL-divergence inconsistency loss between the inter-source and intra-source representations is combined with standard cross-entropy, weighted by a balancing hyperparameter (Dong et al., 2024).
- Intrusion detection: A multi-objective genetic algorithm optimizes three feature-selection metrics jointly, minimizing error against the true labels (Sahu et al., 2021).
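The NIG loss above follows the standard deep-evidential-regression form (a Student-t negative log-likelihood plus an evidence regularizer); the sketch below assumes that form, and the cited papers may add further terms.

```python
# Hedged sketch of the standard deep-evidential-regression loss (Student-t
# negative log-likelihood plus evidence regularizer) that NIG-based EIF models
# build on; the cited papers may add further terms.
import math


def evidential_regression_loss(y, gamma, nu, alpha, beta, lam=0.01):
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * math.log(math.pi / nu)
           - alpha * math.log(omega)
           + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
           + math.lgamma(alpha) - math.lgamma(alpha + 0.5))
    reg = abs(y - gamma) * (2.0 * nu + alpha)  # penalize evidence on errors
    return nll + lam * reg


print(evidential_regression_loss(y=1.0, gamma=0.8, nu=2.0, alpha=3.0, beta=0.5))
```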
5. Architectural Realizations
EIF is instantiated via a variety of deep architectures (a generic evidential output-head sketch follows this list):
- Occupancy grid: DeepLabV3+ ResNet-50 backbone, four-channel input (masses per class), ASPP, evidential output heads; “one-pass” registration/fusion (Kempen et al., 2023).
- Stereo matching: Cost-volume pyramids (multi-scale), STTR transformer branch, “trustworthy regression” evidential heads, serial intra/inter MoNIG modules (Lou et al., 2023).
- Cross-dataset gaze: Modular branches per-source, local regressors for overlapping subspaces, shared backbone, high-level MFF fusion modules for cross-branch mixing (Wang et al., 2024).
- Fake news: Bi-LSTM encoders, word-level attention, co-attention blocks, divergence selection, KL-consistency, final concatenation before classification (Dong et al., 2024).
- Intrusion detection: Ensemble classifiers per location/domain, feature selection via NSGA-II, multi-rule mass combination; flexible fusion scheme (location/domain hierarchy) (Sahu et al., 2021).
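Common to the regression-oriented variants is an output head that maps backbone features to the four NIG parameters under positivity constraints; the PyTorch sketch below is a generic, hedged illustration, not the head of any specific cited model.

```python
# Generic, hedged sketch of an NIG evidential output head of the kind the
# regression-oriented EIF architectures attach to their backbones; layer sizes
# and names are illustrative, not taken from any cited model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    def __init__(self, in_features: int):
        super().__init__()
        self.proj = nn.Linear(in_features, 4)  # -> (gamma, nu, alpha, beta)

    def forward(self, x: torch.Tensor):
        gamma, raw_nu, raw_alpha, raw_beta = self.proj(x).unbind(dim=-1)
        nu = F.softplus(raw_nu)              # nu > 0
        alpha = F.softplus(raw_alpha) + 1.0  # alpha > 1 so E[sigma^2] is finite
        beta = F.softplus(raw_beta)          # beta > 0
        return gamma, nu, alpha, beta


head = EvidentialHead(in_features=128)
gamma, nu, alpha, beta = head(torch.randn(8, 128))
print(gamma.shape, bool((nu > 0).all()))
```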
6. Experimental Evaluation
EIF consistently improves accuracy, generalization, and robustness across domains.
| Domain & Paper | Baseline (metric) | EIF Result | Key Gains |
|---|---|---|---|
| Occupancy grid (Kempen et al., 2023) | Dice (occupied): 0.944 (misaligned) | 0.948 | +4.5% Dice, half KLD at 5 m / 20° |
| Stereo (Lou et al., 2023) | EPE: 0.42 px | 0.33 px | Outperforms STTR/PCWNet; SOTA |
| Gaze (Wang et al., 2024) | Unseen: 7.20° | 6.58° | −0.62° avg error cross-domain |
| Fake news (Dong et al., 2024) | F1: 80.3% | 84.7% | +4.4% F1, robust to source drop |
| Intrusion (Sahu et al., 2021) | DT+RF: 96–97% | +2–3 points | Disjunctive > conjunctive > cautious |
EIF’s robustness against noise, misalignment, and dataset shift is consistently validated via ablations demonstrating that both inter- and intra-fusion, plus explicit evidence/uncertainty modeling, are essential. In occupancy fusion, the deep CNN outperforms rule-based alignment and DS fusion for up to 5 m and 20° pose noise. In gaze estimation, MoNIG fusion across overlapping and cross-domain experts lowers error even for unseen domains. In fake news EMIF, explainability and resilience are gained by fusing comment/news co-attention and divergent external articles.
7. Applications and Significance
EIF is applicable to high-stakes scenarios demanding trustworthiness, explainability, and uncertainty calibration.
- Cooperative vehicles: Real-time evidential OGM fusion supports digital twin creation for C-ITS, improving safety under significant pose error (Kempen et al., 2023).
- Computer vision: Stereo disparity estimation with quantified uncertainties enables confidence-aware depth for downstream robotic/perception applications (Lou et al., 2023).
- Cross-domain prediction: Gaze regression with per-group experts and cross-branch fusion generalizes across heterogeneous datasets and domains (Wang et al., 2024).
- Information verification: EMIF supports robust fake news identification using semantic divergence and evidence consistency (Dong et al., 2024).
- Cyber-physical security: EIF-based intrusion detection reduces false positives by fusing multi-domain, multi-location classifier outputs with uncertainty-aware decision metrics (Sahu et al., 2021).
EIF advances the rigor and reliability of multi-source information fusion under uncertainty, operationalizing probabilistic logic for both classification and regression across diverse, multi-modal data regimes.