
Evidential Inter-Intra Fusion (EIF)

Updated 4 January 2026
  • Evidential Inter-Intra Fusion (EIF) is a framework that integrates heterogeneous evidence using DS, Dirichlet, and NIG distributions to explicitly quantify uncertainty in both intra- and inter-fusion stages.
  • It combines multi-scale, multi-branch information across various domains such as occupancy grids, stereo matching, gaze regression, fake news detection, and intrusion detection using principled fusion rules.
  • EIF improves robustness and explainability by employing uncertainty-aware loss functions and evidence fusion techniques, resulting in enhanced performance across challenging real-world applications.

Evidential Inter-Intra Fusion (EIF) is a principled framework for integrating heterogeneous sources of evidence or multi-level models under explicit uncertainty quantification using formal theories such as Dempster-Shafer (DS), Dirichlet, and Normal-Inverse-Gamma (NIG) distributions. EIF orchestrates fusion both within a source/model (intra-fusion, e.g. multi-scale, multi-branch, or multi-local regressors) and across sources/models (inter-fusion, e.g. multiple datasets, views, sensors, or modalities) with principled evidence combination rules. EIF architectures have emerged in occupancy grid fusion for cooperative autonomous vehicles, stereo matching, cross-dataset regression, explainable fake-news detection, and multi-sensor intrusion detection (Kempen et al., 2023, Lou et al., 2023, Wang et al., 2024, Dong et al., 2024, Sahu et al., 2021).

1. Theoretical Foundations

EIF formalizes sources of information as belief assignments (DS masses, Dirichlet/NIG parameters) expressing uncertainty about latent states or regression targets.

In occupancy grid mapping, each grid cell $i$ is associated with a mass function $m_i: 2^\Theta \rightarrow [0,1]$ over $\Theta = \{F, O\}$ ("free", "occupied"), subject to $m_{i,F} + m_{i,O} + u_i = 1$ with $u_i = m_i(\Theta)$ as the uncertainty mass (Kempen et al., 2023). DS belief ($Bel$), plausibility ($Pl$), and pignistic probability ($p$) are derived via standard transforms.
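
On this binary frame the standard transforms reduce to simple closed forms. A minimal sketch (function names are illustrative, not taken from the papers):

```python
# DS transforms on the binary frame Theta = {F, O}.
# A cell's mass assignment is (m_F, m_O, u) with m_F + m_O + u = 1
# and u = m(Theta), the uncertainty mass.

def belief(m_F: float, m_O: float):
    """On a binary frame, Bel of each singleton is just its own mass."""
    return m_F, m_O

def plausibility(m_F: float, m_O: float, u: float):
    """Pl adds the mass on Theta to each singleton."""
    return m_F + u, m_O + u

def pignistic(m_F: float, m_O: float, u: float):
    """The pignistic transform splits u equally across the two singletons."""
    return m_F + u / 2.0, m_O + u / 2.0
```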

Evidential regression employs the normal-inverse-gamma (NIG) distribution as a conjugate prior over the target mean and variance (Lou et al., 2023, Wang et al., 2024). The posterior NIG parameters $(\delta, \gamma, \alpha, \beta)$ encode both aleatoric ($\mathbb{E}[\sigma^2] = \beta/(\alpha-1)$) and epistemic ($\operatorname{Var}[\mu] = \beta/[\gamma(\alpha-1)]$) uncertainties.
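
Given a predicted NIG head, the two uncertainty terms can be read off directly; a sketch under the parameterization above:

```python
def nig_uncertainties(delta: float, gamma: float, alpha: float, beta: float):
    """Aleatoric and epistemic uncertainty of a NIG(delta, gamma, alpha, beta)
    head, using the moments quoted above (valid for alpha > 1).
    The point prediction itself is simply delta."""
    aleatoric = beta / (alpha - 1.0)            # E[sigma^2]
    epistemic = beta / (gamma * (alpha - 1.0))  # Var[mu]
    return aleatoric, epistemic
```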

For classifier-based fusion (intrusion detection), probabilities from ML classifiers are mapped to DS mass functions, and intra-/inter-fusion is performed using DS rules, e.g.:

  • Dempster’s normalized conjunctive rule
  • Disjunctive rule (for insufficient trust)
  • Cautious rule (least-committed combination) (Sahu et al., 2021)
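
A minimal sketch of the first two rules on the binary frame {F, O}; the tuple layout is an assumption, and the cautious rule (which requires Denœux's weight decomposition) is omitted:

```python
def dempster(m1, m2):
    """Dempster's normalized conjunctive rule on the frame {F, O}.
    Each argument is (m_F, m_O, u) with u = m(Theta)."""
    f1, o1, u1 = m1
    f2, o2, u2 = m2
    conflict = f1 * o2 + o1 * f2          # mass assigned to the empty set
    z = 1.0 - conflict                    # normalization constant
    f = (f1 * f2 + f1 * u2 + u1 * f2) / z
    o = (o1 * o2 + o1 * u2 + u1 * o2) / z
    return f, o, 1.0 - f - o

def disjunctive(m1, m2):
    """Disjunctive rule: commits to a singleton only when both sources do,
    pushing all remaining mass to Theta (appropriate under low trust)."""
    f1, o1, u1 = m1
    f2, o2, u2 = m2
    f, o = f1 * f2, o1 * o2
    return f, o, 1.0 - f - o
```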

2. Intra-Fusion Mechanisms

Intra-fusion synthesizes multiple sources within a given context (scale, location, data partition).

  • Stereo matching: ELFNet predicts evidential distributions (NIG) at three scales of cost volume. Intra-fusion is performed using the MoNIG rule:

\delta_{MoNIG} = \frac{\sum_i \gamma_i \delta_i}{\sum_i \gamma_i},\quad \gamma_{MoNIG} = \sum_i \gamma_i,\quad \alpha_{MoNIG} = \sum_i \alpha_i + \frac{1}{M},\quad \beta_{MoNIG} = \sum_i \beta_i + \frac{1}{M}\sum_i \gamma_i (\delta_i - \delta_{MoNIG})^2

Both means and uncertainties are evidence-weight averaged (Lou et al., 2023).

  • Cross-dataset regression: Each branch is partitioned into overlapping label subspaces, with local regressors trained on subsets. Intra-fusion (MoNIG) fuses the $G$ local NIG heads per dataset (Wang et al., 2024). Local experts specialize to gaze intervals; overlap coefficients ensure robustness.
  • Fake news: Divergence selection identifies the top-$K$ conflicting articles within the relevant news set; intra-fused features represent maximally divergent evidence (Dong et al., 2024).
  • Intrusion detection: Intra-domain fusion merges evidence across locations for the same physical/cyber domain using DS rules (Sahu et al., 2021).
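
The MoNIG rule used in the bullets above can be transcribed directly from the stated formula (the $1/M$ terms follow the equation as written):

```python
def monig(params):
    """Fuse M NIG heads with the MoNIG rule stated above.
    `params` is a list of M tuples (delta_i, gamma_i, alpha_i, beta_i)."""
    M = len(params)
    deltas, gammas, alphas, betas = zip(*params)
    gamma = sum(gammas)
    # Evidence-weighted average of the means.
    delta = sum(g * d for g, d in zip(gammas, deltas)) / gamma
    alpha = sum(alphas) + 1.0 / M
    # Residual spread of branch means inflates the fused beta.
    beta = sum(betas) + sum(
        g * (d - delta) ** 2 for g, d in zip(gammas, deltas)
    ) / M
    return delta, gamma, alpha, beta
```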

3. Inter-Fusion Strategies

Inter-fusion aggregates evidence across sources, modalities, datasets, or model branches.

  • Occupancy grids: Two AV OGMs, after pose normalization, are fused cell-wise using Dempster’s rule (Kempen et al., 2023):

m^{1\oplus 2}_O = \frac{m_{1,O} m_{2,O} + m_{1,O} u_2 + u_1 m_{2,O}}{1 - (m_{1,F} m_{2,O} + m_{1,O} m_{2,F})}

Deep CNNs jointly solve for both registration and fusion.

  • Stereo/disparity: ELFNet fuses local (cost-volume) and global (transformer, STTR) NIG branches via MoNIG, yielding unified evidential predictions (Lou et al., 2023).
  • Cross-dataset regression: All single-dataset branches and the cross-dataset branch outputs are inter-fused via MoNIG to synthesize cross-domain estimates (Wang et al., 2024).
  • Fake news: EMIF concatenates inter-source (co-attention of comments/news) and intra-source (divergent relevant news selection) features, penalizing inconsistency with KL-divergence, before final prediction (Dong et al., 2024).
  • Intrusion detection: Across physical and cyber domains, mass functions from both are fused by DS rule, followed by aggregation across sensor locations (Sahu et al., 2021).

4. Loss Functions and Training Protocols

EIF frameworks deploy uncertainty-aware losses to calibrate model confidence.

  • Occupancy grid fusion: The per-cell loss is

\mathcal{L}_i = (y_{i,F}-\hat{p}_{i,F})^2 + \frac{\hat{p}_{i,F}(1-\hat{p}_{i,F})}{S_i+1} + (y_{i,O}-\hat{p}_{i,O})^2 + \frac{\hat{p}_{i,O}(1-\hat{p}_{i,O})}{S_i+1}

with an occupancy weight $o_w$ for class imbalance (Kempen et al., 2023).
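
A sketch of the per-cell loss, reading $S_i$ as the cell's total evidence (Dirichlet strength) — that interpretation is an assumption here:

```python
def cell_loss(y_F: float, y_O: float, p_F: float, p_O: float, S: float):
    """Per-cell evidential loss as stated above: a Brier-style squared
    error plus a variance term that shrinks as evidence S grows."""
    def variance(p: float) -> float:
        return p * (1.0 - p) / (S + 1.0)
    return ((y_F - p_F) ** 2 + variance(p_F)
            + (y_O - p_O) ** 2 + variance(p_O))
```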

  • Evidential regression (NIG): Training loss combines negative log-model evidence and a regularizer:

\mathcal{L}_{evidence} = \mathcal{L}_{NLL} + \lambda\,\mathcal{L}_R

where

\mathcal{L}_{NLL}(\delta,\gamma,\alpha,\beta) = \frac{1}{2}\ln\frac{\pi}{\gamma} - \alpha\ln\Omega + \left(\alpha+\frac{1}{2}\right)\ln\left((y-\delta)^2\gamma+\Omega\right) + \ln\frac{\Gamma(\alpha)}{\Gamma(\alpha+\frac{1}{2})},\qquad \Omega = 2\beta(1+\gamma)

(Lou et al., 2023, Wang et al., 2024).
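
The NLL term can be transcribed directly, taking $\Omega = 2\beta(1+\gamma)$ as in standard deep evidential regression (an assumption consistent with the moments quoted earlier):

```python
import math

def nig_nll(y: float, delta: float, gamma: float, alpha: float, beta: float):
    """Negative log-likelihood of the marginal Student-t evidence for a
    NIG(delta, gamma, alpha, beta) head, per the formula above."""
    omega = 2.0 * beta * (1.0 + gamma)
    return (0.5 * math.log(math.pi / gamma)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log((y - delta) ** 2 * gamma + omega)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))
```

Targets near the predicted mean $\delta$ incur a lower loss, and the penalty for misses grows with the evidence $\gamma$.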

  • EMIF: KL-divergence inconsistency loss

\mathcal{L}_{KL} = \sum_{q} A'_q \log \frac{A'_q}{H_{inter,q}}

and standard cross-entropy, weighted by $\beta$ (Dong et al., 2024).
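
A minimal sketch of the KL inconsistency term, assuming $A'$ and $H_{inter}$ are probability vectors of equal length (an assumption; EMIF computes them from attention features):

```python
import math

def kl_inconsistency(A, H):
    """KL(A' || H_inter) as in the formula above. Zero-probability
    entries of A contribute nothing, by the convention 0 * log 0 = 0."""
    return sum(a * math.log(a / h) for a, h in zip(A, H) if a > 0)
```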

  • Intrusion detection: Multi-objective GA optimizes three metrics for feature selection: $F_{Bel}$, $F_{Pl}$, and $F_{BetP}$, minimizing error against true labels (Sahu et al., 2021).

5. Architectural Realizations

EIF is instantiated via various deep architectures:

  • Occupancy grid: DeepLabV3+ ResNet-50 backbone, four-channel input (masses per class), ASPP, evidential output heads; “one-pass” registration/fusion (Kempen et al., 2023).
  • Stereo matching: Cost-volume pyramids (multi-scale), STTR transformer branch, “trustworthy regression” evidential heads, serial intra/inter MoNIG modules (Lou et al., 2023).
  • Cross-dataset gaze: Modular branches per-source, local regressors for overlapping subspaces, shared backbone, high-level MFF fusion modules for cross-branch mixing (Wang et al., 2024).
  • Fake news: Bi-LSTM encoders, word-level attention, co-attention blocks, divergence selection, KL-consistency, final concatenation before classification (Dong et al., 2024).
  • Intrusion detection: Ensemble classifiers per location/domain, feature selection via NSGA-II, multi-rule mass combination; flexible fusion scheme (location/domain hierarchy) (Sahu et al., 2021).

6. Experimental Evaluation

EIF consistently improves accuracy, generalization, and robustness across domains.

| Domain (Paper) | Baseline | EIF | Key Gains |
|---|---|---|---|
| Occupancy grid (Kempen et al., 2023) | Dice_occ: 0.944 (misaligned) | 0.948 | +4.5% Dice, half the KLD at 5 m / 20° |
| Stereo (Lou et al., 2023) | EPE: 0.42 px | 0.33 px | Outperforms STTR/PCWNet; SOTA |
| Gaze (Wang et al., 2024) | Unseen: 7.20° | 6.58° | −0.62° avg. error cross-domain |
| Fake news (Dong et al., 2024) | F1: 80.3% | 84.7% | +4.4% F1, robust to source drop |
| Intrusion (Sahu et al., 2021) | DT+RF: 96–97% | +2–3 points | Disjunctive > conjunctive > cautious |

EIF’s robustness against noise, misalignment, and dataset shift is consistently validated via ablations demonstrating that both inter- and intra-fusion, plus explicit evidence/uncertainty modeling, are essential. In occupancy fusion, the deep CNN outperforms rule-based alignment and DS fusion for up to 5 m and 20° pose noise. In gaze estimation, MoNIG fusion across overlapping and cross-domain experts lowers error even for unseen domains. In fake news EMIF, explainability and resilience are gained by fusing comment/news co-attention and divergent external articles.

7. Applications and Significance

EIF is applicable to high-stakes scenarios demanding trustworthiness, explainability, and uncertainty calibration.

  • Cooperative vehicles: Real-time evidential OGM fusion supports digital twin creation for C-ITS, improving safety under significant pose error (Kempen et al., 2023).
  • Computer vision: Stereo disparity estimation with quantified uncertainties enables confidence-aware depth for downstream robotic/perception applications (Lou et al., 2023).
  • Cross-domain prediction: Gaze regression with per-group experts and cross-branch fusion generalizes across heterogeneous datasets and domains (Wang et al., 2024).
  • Information verification: EMIF supports robust fake news identification using semantic divergence and evidence consistency (Dong et al., 2024).
  • Cyber-physical security: EIF-based intrusion detection reduces false positives by fusing multi-domain, multi-location classifier outputs with uncertainty-aware decision metrics (Sahu et al., 2021).

EIF advances the rigor and reliability of multi-source information fusion under uncertainty, operationalizing probabilistic logic for both classification and regression across diverse, multi-modal data regimes.
