
Entropy-Aware Spatial Fusion

Updated 12 January 2026
  • Entropy-aware spatial fusion combines distributed data using uncertainty measures to enhance robustness and accuracy in fields like medical imaging and sensor fusion.
  • In medical imaging, entropy-aware fuzzy integrals improve intracranial hemorrhage diagnosis by emphasizing high-confidence CT slices in decision-making.
  • Entropy regularization in sensor fusion yields stable source localization even under spatial misalignment, utilizing optimal mass transport methods.

Entropy-aware spatial fusion refers to a set of methodologies that combine spatially distributed information—such as slice-level predictions, sensor-array measurements, or latent image features—by leveraging explicit measures of uncertainty (entropy) during the fusion process. Unlike naïve averaging or majority-vote schemes, these frameworks adaptively weight or regularize spatial contributions according to their estimated information content, thereby improving robustness, accuracy, and computational efficiency across multiple domains such as medical imaging, source localization, and neural coding (Chagahi et al., 11 Mar 2025, Elvander et al., 2018, Khoshkhahtinat et al., 2024).

1. Formalization: Entropy as a Guide for Spatial Aggregation

In entropy-aware spatial fusion, entropy quantifies the uncertainty associated with each spatial unit. For classification tasks over spatial stacks (e.g., CT slices), entropy may be defined as $E(s_i) = 1 - \max_k P(s_i, c_k)$, where $P(s_i, c_k)$ is the classifier's softmax probability for class $c_k$. Slices or spatial elements that yield ambiguous outputs (i.e., high entropy) exert reduced influence on the final fused decision.
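As a concrete toy illustration, this slice-level uncertainty score can be computed directly from softmax outputs. The function name and array shapes below are illustrative, not taken from the cited paper:

```python
import numpy as np

def slice_uncertainty(probs: np.ndarray) -> np.ndarray:
    """probs: (n_slices, n_classes) softmax outputs.
    Returns E(s_i) = 1 - max_k P(s_i, c_k) per slice."""
    return 1.0 - probs.max(axis=1)

probs = np.array([[0.90, 0.10],    # confident slice -> low uncertainty
                  [0.55, 0.45]])   # ambiguous slice -> high uncertainty
E = slice_uncertainty(probs)
```

High-entropy slices (here the second one) would then be down-weighted in any subsequent aggregation step.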

In sensor fusion and optimal mass transport (OMT), entropy regularization smooths fusion by penalizing overly concentrated solutions. For a spatial spectrum $s \in \Delta_N$ on a grid, the negative Shannon entropy $H(s) = \sum_k s_k \log s_k$ is added to barycenter objectives as a penalty that encourages spatially distributed, robust solutions under misalignment (Elvander et al., 2018).
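To see why adding $\epsilon H(s)$ with $H(s) = \sum_k s_k \log s_k$ discourages concentration, consider the simpler problem of minimizing a linear cost plus this penalty over the simplex; the minimizer has the closed softmax form sketched below. This is a standalone illustration of the penalty's effect, not the barycenter problem itself:

```python
import numpy as np

def entropic_argmin(c: np.ndarray, eps: float) -> np.ndarray:
    """Closed-form minimizer of <c, s> + eps * sum_k s_k log s_k over the simplex:
    s_k proportional to exp(-c_k / eps), i.e. a softmax with temperature eps."""
    z = np.exp(-(c - c.min()) / eps)   # shift by min(c) for numerical stability
    return z / z.sum()

c = np.array([0.0, 0.1, 1.0])          # cost of placing mass in each grid cell
sharp = entropic_argmin(c, eps=0.01)   # weak penalty: nearly all mass on one cell
smooth = entropic_argmin(c, eps=1.0)   # strong penalty: mass spread over the grid
```

Shrinking the penalty recovers the concentrated (arg-min) solution, while a larger penalty spreads mass over the grid—the sharpness/resilience trade-off discussed later in this article.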

In neural codecs, channel- and position-wise entropy of the latent code guides context modeling and factorization during compression and decompression, allowing efficient and accurate global-spatial fusion (Khoshkhahtinat et al., 2024).

2. Entropy-Aware Fusion via Fuzzy Integrals in Medical Imaging

A notable example of entropy-aware spatial fusion is the entropy-aware fuzzy integral scan-level decision aggregation for intracranial hemorrhage (ICH) diagnosis in brain CTs (Chagahi et al., 11 Mar 2025). The workflow consists of:

  • Slice-level probability estimation: Each slice $s_i$ of a CT scan yields a confidence vector $P(s_i)$ over classes.
  • Entropy computation: For each $s_i$, compute $E(s_i) = 1 - \max_k P(s_i, c_k)$.
  • Fuzzy densities: Set $\mu(\{s_i\}) = \max_k P(s_i, c_k)$, reflecting the slice's most confident prediction.
  • Choquet integration: Slices are sorted by $\mu$; suffix-set measures $\mu(A_i)$ are computed recursively via the Sugeno–$\lambda$ formula:

$$\mu(A_i) = \sum_{j=i}^n \mu(\{s_j\}) + \lambda \prod_{j=i}^n \mu(\{s_j\})$$

with $\lambda$ grid-searched in $(-1, 0)$. For each class $c_k$, the Choquet integral (with the convention $P(s_0, c_k) = 0$) is

$$\mathcal{F}(S, c_k) = \sum_{i=1}^n \bigl(P(s_i, c_k) - P(s_{i-1}, c_k)\bigr)\,\mu(A_i)$$

  • Scan-level output: The class with maximal $\mathcal{F}(S, c_k)$ is selected as the scan diagnosis.

This approach robustly downweights noisy, low-information slices and models inter-slice synergy, outperforming mean/vote-based and MLP-based fusion both in accuracy and noise robustness, while remaining computationally light (Chagahi et al., 11 Mar 2025).
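The workflow above can be sketched in a few lines of numpy. This is a minimal interpretation of the cited method: slices are sorted per class by their value (the ordering the integral formula implies), the suffix-measure expression is taken verbatim from the text without the $\mu(S) = 1$ normalization a full Sugeno construction would enforce, and $\lambda$ is fixed rather than grid-searched:

```python
import numpy as np

def choquet_fusion(probs: np.ndarray, lam: float = -0.5) -> np.ndarray:
    """Scan-level fusion of slice softmax outputs probs (n_slices, n_classes).
    Densities mu({s_i}) = max_k P(s_i, c_k); suffix-set measures use the
    sum-plus-lambda-times-product form quoted in the text."""
    n, n_classes = probs.shape
    mu = probs.max(axis=1)                    # fuzzy density per slice
    scores = np.zeros(n_classes)
    for k in range(n_classes):
        order = np.argsort(probs[:, k])       # ascending slice values for class k
        v = probs[order, k]
        m = mu[order]
        prev = 0.0
        for i in range(n):
            suffix = m[i:]                    # densities of A_i = {s_i, ..., s_n}
            mu_Ai = suffix.sum() + lam * suffix.prod()
            scores[k] += (v[i] - prev) * mu_Ai
            prev = v[i]
    return scores

probs = np.array([[0.9, 0.1],    # confident slice
                  [0.6, 0.4]])   # weaker slice
scores = choquet_fusion(probs)   # class 0 should dominate
```

A practical implementation would additionally solve for $\lambda$ (or grid-search it on validation scans, as the paper does) and normalize the measures.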

3. Entropy Regularized Optimal Mass Transport for Sensor Fusion

In non-coherent sensor fusion for source localization, entropy-aware spatial fusion is realized by entropy-regularized OMT barycenters (Elvander et al., 2018). Key steps include:

  • Model spectrum: Each sensor array provides a spectrum estimate $\mu_j$ on a spatial grid; the global target is the barycenter $s$.
  • OMT cost: Wasserstein distances $W_\epsilon(s, \mu_j)$ encode spatial displacement, regularized by an entropy parameter $\epsilon > 0$.
  • Optimization: The joint fusion problem is

$$\min_{s \in \Delta_N} \sum_j w_j W_\epsilon(s, \mu_j) + \epsilon H(s)$$

  • Sinkhorn algorithm: Efficient dual updates leverage entropy regularization for differentiable, parallelizable barycenter computation:

$$s \leftarrow \left( \prod_j \left(K^\top u_j\right)^{w_j} \right)^{1/\sum_j w_j}$$

where $K = \exp(-C/\epsilon)$ is the elementwise Gibbs kernel of the spatial cost matrix $C$.

  • Robustness: Finite $\epsilon$ smooths transport plans, yielding barycenters that are stable under misalignment and sensor perturbations.

Experimental results demonstrate superior robustness compared to traditional fusion (MUSIC, MVDR), especially under increasing geometric misalignment (Elvander et al., 2018).
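The Sinkhorn-style update above can be sketched via iterative Bregman projections. The grid, cost matrix, and $\epsilon$ below are illustrative choices, not those of the cited experiments:

```python
import numpy as np

def sinkhorn_barycenter(mus, C, weights, eps=0.05, iters=200):
    """Entropy-regularized OMT barycenter of spectra `mus` (each summing to 1)
    on a shared grid with ground-cost matrix C, via iterative Bregman projections."""
    K = np.exp(-C / eps)                      # Gibbs kernel
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                           # normalized barycenter weights
    v = [np.ones(C.shape[0]) for _ in mus]
    for _ in range(iters):
        u = [mu / (K @ vj) for mu, vj in zip(mus, v)]
        # geometric-mean barycenter update: s = prod_j (K^T u_j)^{w_j}
        s = np.prod([(K.T @ uj) ** wj for uj, wj in zip(u, w)], axis=0)
        v = [s / (K.T @ uj) for uj in u]
    return s / s.sum()

# two point-mass spectra at opposite ends of a 1-D spatial grid
N = 21
x = np.linspace(0.0, 1.0, N)
C = (x[:, None] - x[None, :]) ** 2            # quadratic ground cost
mu1, mu2 = np.zeros(N), np.zeros(N)
mu1[2], mu2[18] = 1.0, 1.0
s = sinkhorn_barycenter([mu1, mu2], C, weights=[0.5, 0.5])
```

With equal weights the barycenter of the two point masses concentrates near the midpoint of the grid, smoothed by the entropic term.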

4. Entropy Model-Guided Spatial Fusion in Neural Codecs

Neural image codecs exploit entropy-aware spatial fusion to optimize compression efficiency and reconstruction quality (Khoshkhahtinat et al., 2024). The approach features:

  • Latent partitioning: The feature tensor $\hat y \in \mathbb{R}^{H \times W \times M}$ is partitioned into $J$ channel chunks.
  • Hierarchical context model: Density model is factorized chunkwise:

$$p(\hat y) = \prod_j p\left(\hat y^{(j)} \mid \hat y^{(<j)}, C_{\text{local}}^{(j)}, C_{\text{global}}^{(j)}\right)$$

where $C_{\text{local}}^{(j)}$ is a convolution-based local context and $C_{\text{global}}^{(j)}$ is derived via a Swin Transformer with a trainable Laplacian-shaped positional encoding whose scale is adapted to each chunk's inferred entropy.

  • Fusion: Context vectors and previously decoded chunks are concatenated and processed by an MLP to provide spatially informed Gaussian parameters for entropy coding.
  • Acceleration: The model requires $O(J)$ sequential steps (per chunk and anchor/non-anchor split) versus $O(HW)$ for fully serial models, reducing decoding latency to ≈200 ms per image.

This yields improved BD-rate and perceptual metrics, showing that entropy-guided fusion of spatial contexts improves both coding efficacy and runtime (Khoshkhahtinat et al., 2024).
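A toy numpy sketch of the chunkwise factorization: the "MLP" is a single scalar weight, the context is a plain mean over already-decoded chunks, and coded size is proxied by Gaussian cross-entropy. This shows only the sequential chunk-conditioning structure, not the cited architecture:

```python
import numpy as np

def gaussian_bits(y, mean, scale):
    """Cross-entropy (in bits) of latents under a factorized Gaussian model,
    a proxy for arithmetic-coded size."""
    nll = 0.5 * ((y - mean) / scale) ** 2 + np.log(scale * np.sqrt(2.0 * np.pi))
    return np.sum(nll) / np.log(2.0)

def chunked_entropy_model(y_hat, J=4, w_ctx=0.9):
    """Prices an (H, W, M) latent tensor chunk by chunk: chunk j is modeled
    conditioned on a context built from the already-decoded chunks (< j)."""
    chunks = np.split(y_hat, J, axis=-1)
    total_bits, decoded = 0.0, []
    for chunk in chunks:
        # context = mean of previously decoded chunks (zeros for the first chunk)
        ctx = np.mean(decoded, axis=0) if decoded else np.zeros_like(chunk)
        mean = w_ctx * ctx                # stand-in for the context-fusion MLP
        total_bits += gaussian_bits(chunk, mean, 1.0)
        decoded.append(chunk)             # now available as context for later chunks
    return total_bits

# channel chunks are strongly correlated, so chunk-conditioning should save bits
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4, 1))
y_hat = np.repeat(base, 8, axis=-1) + 0.1 * rng.normal(size=(4, 4, 8))
bits_cond = chunked_entropy_model(y_hat)
bits_uncond = gaussian_bits(y_hat, 0.0, 1.0)
```

Because later chunks are predicted from earlier ones, the conditional model spends fewer bits than an unconditional Gaussian whenever the chunks carry shared structure.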

5. Algorithmic Workflows and Computational Properties

A selection of representative entropy-aware spatial fusion algorithms is summarized below.

| Domain | Entropy Role | Main Fusion Mechanism |
| --- | --- | --- |
| Medical CT | Slice-wise weighting | Choquet/fuzzy integral (Chagahi et al., 11 Mar 2025) |
| Sensor arrays | Regularized barycenter | Entropy-regularized OMT (Elvander et al., 2018) |
| Neural codec | Adaptive context modeling | Entropy-guided chunk fusion (Khoshkhahtinat et al., 2024) |

These algorithms exhibit the following properties:

  • Linear or near-linear computational complexity in the number of spatial units.
  • Deterministic runtime due to absence of learned late-stage fusion layers.
  • Robustness to low-information or noisy spatial units, as high-entropy entities are adaptively down-weighted or smoothed.

6. Comparative Performance and Robustness

Empirical evaluations demonstrate consistently superior resilience of entropy-aware spatial fusion frameworks to noise and miscalibration. For example:

  • In ICH CT fusion, Choquet-integral based entropy-aware fusion surpasses average/voting/MLP aggregation in both accuracy and resistance to noisy slices, with negligible overhead (Chagahi et al., 11 Mar 2025).
  • In sensor localization, OMT barycenters show smaller degradation in localization error under array misalignment as compared to MUSIC, MVDR, and SPICE (Elvander et al., 2018).
  • In neural codecs, entropy-model guided chunk-based (parallel) fusion demonstrates ≈9% BD-rate savings and significantly reduced decoding time relative to prior auto-regressive or purely parallel baselines (Khoshkhahtinat et al., 2024).

These advantages stem from explicit modeling of information reliability and structured aggregation of spatial context.

7. Domain-Specific Adaptations and Hyperparameter Considerations

Implementations of entropy-aware spatial fusion utilize domain-appropriate entropy measures and hyperparameters:

  • Medical imaging frameworks use one minus the maximum softmax probability as the entropy measure, grid-searching $\lambda$ to control the non-additivity of the fusion (Chagahi et al., 11 Mar 2025).
  • Sensor fusion uses an entropy penalty $\epsilon$ that regulates the spatial spread and regularity of the barycenter, typically chosen on the order of the squared spatial resolution or cross-validated (Elvander et al., 2018).
  • Neural codecs learn the Laplacian positional-encoding scale per chunk according to its entropy structure, yielding chunk-adaptive receptive fields (Khoshkhahtinat et al., 2024).

Proper hyperparameter selection is critical for balancing robustness, accuracy, and computational cost: a finite entropy penalty keeps the fusion smooth and stable, while pushing the parameter toward either extreme trades sharpness against resilience.
