Reliability-Likelihood Fusion Rule

Updated 22 January 2026
  • The reliability–likelihood fusion rule is a probabilistic late fusion technique that weights contributions based on quantified reliability measures such as KL divergence and ROC properties.
  • The rule is instantiated in two architectures: probabilistic circuits for multi-modal inference, and a linear fusion rule for distributed sensor networks.
  • Empirical results demonstrate robust performance with graceful degradation under noise, effective handling of missing data, and optimized energy allocation using KKT-based power strategies.

The reliability–likelihood fusion rule refers to a class of probabilistically principled late fusion techniques in both multi-modal discriminative learning and distributed detection, in which the contributions of individual sources are weighted according to an explicit measure of their reliability or credibility. This approach is designed to ensure robust inference in noisy, multi-source scenarios by discounting unreliable modalities or sensors and emphasizing those with higher evidential value. The rule is instantiated in recent works as both a credibility-weighted mean over probabilistic expert outputs using probabilistic circuits for multi-modal fusion (Sidheekh et al., 2024), and as a linear fusion rule (LFR) weighted by local reliability and channel quality for distributed detection in sensor networks (Aldalahmeh et al., 2019).

1. Formal Definition and Foundations

The reliability–likelihood fusion rule fuses local distributions or statistics—such as modality-specific predictive posteriors or cluster-level sensor reports—into a global inference by assigning weights proportional to their estimated reliability. The key principle is that each source’s weight is determined by a data-driven, probabilistically grounded reliability measure.

  • Multi-modal fusion view (Sidheekh et al., 2024):
    • For $M$ modalities, each unimodal predictor yields $p_j(y) = P(Y = y \mid X_j = x_j)$. The joint distribution is modeled with a probabilistic circuit (PC) $P_\theta(Y, \mathbf{p}_{1:M})$.
    • The credibility $\mathcal{C}_j$ of modality $j$ is the divergence (e.g., KL divergence) between the full-fusion posterior $P(Y \mid \mathbf{p}_{1:M})$ and the leave-one-out posterior $P(Y \mid \mathbf{p}_{-j})$.
    • The credibility weights are then normalized: $\tilde{\mathcal{C}}_j = \mathcal{C}_j / \sum_{i=1}^M \mathcal{C}_i$.
    • The fused prediction is a convex combination: $P(y \mid x_{1:M}) = \sum_{j=1}^M \tilde{\mathcal{C}}_j\, p_j(y)$.
  • Distributed detection view (Aldalahmeh et al., 2019):
    • Each cluster $k$ provides a summary statistic $Z_k$, with reliability determined by cluster ROC properties and communication-channel SNR.
    • The LFR computes the global test statistic $T_{\mathrm{LFR}} = \sum_{k=1}^M d_k Z_k$, where $d_k$ is proportional to the gap between local detection and false-alarm rates and inversely proportional to channel noise: $d_k = N_k (P_{d,k} - P_{f,k}) \sqrt{P_k} / (P_k \sigma_{s,k}^2 + \sigma_c^2)$.

In both cases, fusion weights are nonnegative and sum to 1 (for distribution fusion) or maximize discriminability (for detection), ensuring mathematical validity.

2. Computation of Reliability or Credibility Weights

Multi-modal Fusion with Probabilistic Circuits

  • Credibility computation: For each modality $j$,

$$\mathcal{C}_j = \mathrm{KL}\left(P(Y \mid \mathbf{p}_{1:M}) \,\big\|\, P(Y \mid \mathbf{p}_{-j})\right)$$

where $P(Y \mid \mathbf{p}_{1:M})$ and $P(Y \mid \mathbf{p}_{-j})$ are efficiently computed using upward and downward passes in the learned PC, marginalizing out the relevant features.

  • Normalization: All $\mathcal{C}_j$ are aggregated to yield the relative credibilities $\tilde{\mathcal{C}}_j$.
  • Fused output: The final prediction is $p_{\rm fused}(y) = \sum_j \tilde{\mathcal{C}}_j\, p_j(y)$.
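The steps above can be sketched in a few lines, assuming the full and leave-one-out posteriors have already been extracted from the learned PC (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, in nats."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def credibility_weighted_fusion(unimodal_posteriors, full_posterior, loo_posteriors):
    """Fuse per-modality posteriors p_j(y) with KL-based credibility weights.

    unimodal_posteriors: list of M arrays p_j(y)
    full_posterior:      array P(Y | p_{1:M}) from the PC
    loo_posteriors:      list of M arrays P(Y | p_{-j}), one per left-out modality
    """
    # Credibility of modality j: divergence between full and leave-one-out posteriors
    creds = np.array([kl_divergence(full_posterior, loo) for loo in loo_posteriors])
    weights = creds / creds.sum()  # normalized credibilities, sum to 1
    # Convex combination of the unimodal posteriors
    fused = sum(w * np.asarray(p, dtype=float)
                for w, p in zip(weights, unimodal_posteriors))
    return fused, weights
```

Because the weights are nonnegative and normalized, the fused output is itself a valid probability distribution whenever the unimodal posteriors are.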

Distributed Detection using the Linear Fusion Rule

  • Cluster weights: For cluster $k$,

$$d_k = N_k (P_{d,k} - P_{f,k}) \frac{\sqrt{P_k}}{P_k \sigma_{s,k}^2 + \sigma_c^2}$$

where $N_k$ is the number of nodes, $P_{d,k}$ and $P_{f,k}$ are the cluster detection and false-alarm probabilities, $P_k$ is the transmission power, and $\sigma_{s,k}^2$, $\sigma_c^2$ are the noise variances on the SN–CH and CH–FC links.

  • Estimation: Unknown detection parameters can be estimated via approximate ML over multiple time slots.
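The cluster weighting and the global test can be sketched directly from the formula above (array names and the example numbers are illustrative, not from the paper):

```python
import numpy as np

def lfr_weights(N, Pd, Pf, P_tx, sigma_s2, sigma_c2):
    """LFR weights d_k = N_k (P_{d,k} - P_{f,k}) sqrt(P_k) / (P_k sigma_{s,k}^2 + sigma_c^2).

    N, Pd, Pf, P_tx, sigma_s2 are per-cluster arrays; sigma_c2 is the
    (shared) CH-FC channel noise variance.
    """
    N, Pd, Pf, P_tx, sigma_s2 = map(np.asarray, (N, Pd, Pf, P_tx, sigma_s2))
    return N * (Pd - Pf) * np.sqrt(P_tx) / (P_tx * sigma_s2 + sigma_c2)

def lfr_decide(Z, d, gamma):
    """Global test: T = sum_k d_k Z_k, declare detection when T exceeds gamma."""
    T = float(np.dot(np.asarray(d, dtype=float), np.asarray(Z, dtype=float)))
    return T, T > gamma
```

Note how a cluster with a small ROC gap ($P_{d,k} \approx P_{f,k}$) or a noisy channel automatically receives a small weight, regardless of its raw statistic $Z_k$.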

3. Fusion Formulations and Implementation

The reliability–likelihood fusion can be instantiated in two primary architectures:

Table: Core Fusion Rule Formulations

| Setting | Weight Calculation | Fused Rule |
| --- | --- | --- |
| Multi-modal fusion | $\tilde{\mathcal{C}}_j = \mathcal{C}_j / \sum_i \mathcal{C}_i$ | $P(y \mid x_{1:M}) = \sum_j \tilde{\mathcal{C}}_j\, p_j(y)$ |
| Distributed detection | $d_k$ as above | $T = \sum_k d_k Z_k \gtrless \Gamma$ |

Both allow efficient implementation: multi-modal fusion leverages PC operations scaling linearly in the number of modalities $M$; the LFR is efficiently computable given cluster statistics and estimation methods for local rates.

4. Theoretical Properties and Robustness

The reliability–likelihood fusion rule exhibits several desirable properties:

  • Commutativity and associativity: The order of fusion and grouping of modalities do not affect the result, owing to the symmetry of convex sums and linear combinations (Sidheekh et al., 2024).
  • Robustness to noise: Less reliable or noisy modalities/sensors are downweighted as their credibility or ROC gap diminishes. For multi-modal fusion, Theorem 1 shows that the expected credibility satisfies $\mathbb{E}[\mathcal{C}_j] \ge -H(\mathcal{F}_{\phi_j} \mid F^{-j})$, so higher-entropy (noisier) predictors have lower influence.
  • Graceful degradation: In both empirical multi-modal [(Sidheekh et al., 2024), see Figs. 3–5] and sensor network [(Aldalahmeh et al., 2019), simulation section] settings, the fusion rule ensures slow performance degradation under noise or partial failure compared to unweighted or naïve methods.
  • Handling missing data: In the PC-based scheme, any absent modality can be marginalized exactly using the PC, and credibilities recalculated over the remaining sources.
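At the fusion layer, handling an absent modality amounts to dropping it and renormalizing the remaining weights. The sketch below is a simplification (in the full method, the PC would marginalize the missing modality exactly and the credibilities would be recomputed; here we simply renormalize the existing ones):

```python
import numpy as np

def fuse_without(unimodal_posteriors, credibilities, missing):
    """Fuse the remaining modalities after dropping those in `missing`.

    unimodal_posteriors: list of M arrays p_j(y)
    credibilities:       list of M nonnegative credibility scores C_j
    missing:             set of indices of absent modalities
    """
    keep = [j for j in range(len(unimodal_posteriors)) if j not in missing]
    creds = np.array([credibilities[j] for j in keep], dtype=float)
    weights = creds / creds.sum()  # renormalize over the surviving sources
    fused = sum(w * np.asarray(unimodal_posteriors[j], dtype=float)
                for w, j in zip(weights, keep))
    return fused
```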

5. Practical Implementation Procedures

  • Multi-modal setting (Sidheekh et al., 2024): At test time, evaluate unimodal posteriors, compute the full and leave-one-out PC conditionals, obtain the credibilities $\mathcal{C}_j$ and weights, then output the convex combination of posteriors. If the Direct-PC method is used, skip credibility computation and use the joint conditional directly.
  • Distributed setting (Aldalahmeh et al., 2019): Compute or estimate cluster ROC parameters and form the linear combination of statistics $Z_k$ with weights $d_k$; for unknown rates, use the LFR–aML approach for online estimation. Power can be allocated across clusters using KKT-optimized water-filling to maximize overall reliability at minimal cost.
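The paper's KKT-based allocation optimizes a detection-reliability objective; as an illustrative stand-in, the sketch below solves the classic water-filling KKT system for the textbook objective $\sum_k \log(1 + g_k P_k)$ under a total power budget (this objective and the function names are our assumption, not the paper's exact formulation):

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Classic KKT water-filling: choose P_k >= 0 with sum_k P_k = total_power
    to maximize sum_k log(1 + g_k P_k).  The KKT conditions give
    P_k = max(mu - 1/g_k, 0) for a water level mu, found here by bisection."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + (1.0 / g).max()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)                 # candidate water level
        P = np.maximum(mu - 1.0 / g, 0.0)    # KKT-optimal allocation for fixed mu
        if P.sum() > total_power:
            hi = mu                          # budget exceeded: lower the water
        else:
            lo = mu
    return np.maximum(lo - 1.0 / g, 0.0)
```

The qualitative behavior matches the paper's strategy: strong channels (large $g_k$) receive most of the budget, while sufficiently poor channels are allocated zero power.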

Given $p_1 = (0.2, 0.8)$ (image) and $p_2 = (0.6, 0.4)$ (audio), suppose the PC yields $P_{\rm full} = (0.3, 0.7)$, $P_{-1} = (0.5, 0.5)$, $P_{-2} = (0.1, 0.9)$. The KL-based credibilities are $\mathcal{C}_1 = \mathrm{KL}(P_{\rm full} \,\|\, P_{-1}) \approx 0.082$ and $\mathcal{C}_2 = \mathrm{KL}(P_{\rm full} \,\|\, P_{-2}) \approx 0.154$ (in nats), giving normalized weights $\tilde{\mathcal{C}}_1 \approx 0.35$ and $\tilde{\mathcal{C}}_2 \approx 0.65$, a fused prediction of approximately $(0.46, 0.54)$, and final decision 1.

6. Empirical Results and System Optimization

  • Multi-modal PC-based fusion (Sidheekh et al., 2024): Experiments demonstrate high robustness; when a modality is corrupted, its weight vanishes and global performance (F1, AUROC) degrades moderately rather than catastrophically.
  • Distributed detection LFR (Aldalahmeh et al., 2019): Simulations confirm that the LFR closely tracks optimal LLR performance in high-SNR scenarios, significantly outperforms the counting rule in low-to-moderate SNR, and that online estimation (LFR–aML) entails only minor additional loss. KKT-based power allocation secures up to ~84% transmission-energy savings for only ~5% detection loss, with tail bounds accurately predicting $P_F$ and $P_D$.

7. Significance and Applicability

The reliability–likelihood fusion rule provides a general, mathematically grounded framework for late fusion in both machine learning and distributed detection contexts, with strong theoretical guarantees and demonstrated empirical efficacy. Its reliance on probabilistic principles enables optimal leveraging of heterogeneous sources in dynamic, noisy, or partially observed environments, and its tractable nature supports scalable and interpretable deployment (Sidheekh et al., 2024, Aldalahmeh et al., 2019).
