
Trusted Multi-View Classification with Dynamic Evidential Fusion (2204.11423v3)

Published 25 Apr 2022 in cs.LG

Abstract: Existing multi-view classification algorithms focus on promoting accuracy by exploiting different views, typically integrating them into common representations for follow-up tasks. Although effective, it is also crucial to ensure the reliability of both the multi-view integration and the final decision, especially for noisy, corrupted and out-of-distribution data. Dynamically assessing the trustworthiness of each view for different samples could provide reliable integration. This can be achieved through uncertainty estimation. With this in mind, we propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC), providing a new paradigm for multi-view learning by dynamically integrating different views at an evidence level. The proposed TMC can promote classification reliability by considering evidence from each view. Specifically, we introduce the variational Dirichlet to characterize the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory. The unified learning framework induces accurate uncertainty and accordingly endows the model with both reliability and robustness against possible noise or corruption. Both theoretical and experimental results validate the effectiveness of the proposed model in accuracy, robustness and trustworthiness.

Citations (185)

Summary

  • The paper introduces a dynamic evidential fusion mechanism using Dempster-Shafer theory to integrate multi-view data reliably, even in noisy environments.
  • It employs uncertainty quantification with a Variational Dirichlet framework to provide interpretable predictions and robust trust measures.
  • Empirical validation shows the method significantly outperforms traditional techniques, achieving an 18.7 percentage point accuracy boost on the Scene15 dataset.

An Overview of Trusted Multi-View Classification with Dynamic Evidential Fusion

The paper "Trusted Multi-View Classification with Dynamic Evidential Fusion" presents an algorithm designed to address challenges in multi-view classification by ensuring both accuracy and reliability of predictions. The proposed algorithm, termed Trusted Multi-View Classification (TMC), diverges from conventional multi-view classification methods, which primarily focus on enhancing accuracy through data integration without adequately addressing the reliability of predictions in the presence of noise, corruption, or out-of-distribution data.

Key Contributions

  • Dynamic Evidential Fusion: The core innovation in this research is the integration of a dynamic mechanism that leverages the Dempster-Shafer evidence theory to assess and fuse information from multiple views based on their reliability. This approach is particularly beneficial in scenarios involving noisy or incomplete data, as it allows the system to weigh the inputs and deemphasize the less reliable ones.
  • Uncertainty Quantification: The framework quantifies the uncertainty inherent in predictions using a Variational Dirichlet distribution, mapping the evidence collected from each view to per-class belief masses and an overall uncertainty. This uncertainty estimate is integral to the fusion strategy, providing a principled basis for trusted decision-making.
  • Theoretical and Empirical Validation: The paper provides a comprehensive theoretical analysis of the model’s capabilities, including propositions that demonstrate how additional views can enrich classification accuracy and reliability. Additionally, empirical results on multiple datasets exemplify the model's superiority in accuracy, AUROC, and robustness compared to existing methods.
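The two mechanisms above can be illustrated with a minimal sketch. Following the paper's formulation, per-view evidence e over K classes parameterizes a Dirichlet (alpha = e + 1), yielding belief masses b_k = e_k / S and uncertainty u = K / S with S = sum(alpha); opinions from two views are then fused with the reduced Dempster's rule. The example evidence vectors below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def opinion_from_evidence(e):
    """Map non-negative evidence over K classes to a subjective opinion.

    Dirichlet parameters: alpha = e + 1, S = sum(alpha).
    Belief masses: b_k = e_k / S; uncertainty: u = K / S.
    Note that b.sum() + u == 1 by construction.
    """
    e = np.asarray(e, dtype=float)
    K = e.size
    S = (e + 1.0).sum()
    return e / S, K / S

def combine_opinions(b1, u1, b2, u2):
    """Fuse two views' opinions with the reduced Dempster's rule.

    The conflict C sums b1_i * b2_j over all i != j; the fused
    belief and uncertainty are rescaled by 1 - C.
    """
    conflict = b1.sum() * b2.sum() - np.sum(b1 * b2)  # sum over i != j
    scale = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / scale
    u = (u1 * u2) / scale
    return b, u

# A confident view and a nearly uninformative (noisy) view over 3 classes:
b1, u1 = opinion_from_evidence([40.0, 1.0, 1.0])
b2, u2 = opinion_from_evidence([0.5, 0.4, 0.6])
b, u = combine_opinions(b1, u1, b2, u2)
# The fused opinion remains dominated by the confident view, and the
# combined uncertainty is lower than either view's alone.
```

This illustrates why the fusion is "dynamic": a view with little evidence contributes mostly uncertainty mass, so the confident view dominates the combined belief without any per-sample weighting being learned.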

Experimental Evaluation

The algorithm was evaluated across multiple datasets, including the Handwritten dataset, CUB, PIE, Caltech101, Scene15, and HMDB, reflecting diverse application domains. The results showed that TMC, as well as its enhancement with a pseudo-view termed ETMC, outperformed traditional methods. For instance, on the Scene15 dataset, ETMC achieved an increase in accuracy of around 18.7 percentage points compared to the next best method, indicating its robust handling of multi-view data, particularly when one or more views are noisy.

Implications and Future Directions

The model’s ability to dynamically evaluate and integrate different views at the level of evidential predictions offers significant implications for safety-critical applications where trusted decisions are paramount, such as autonomous driving and medical diagnostics. The ability to obtain confidence measures alongside predictions can improve interpretability and trust, key aspects for deployment in such domains.

Looking forward, the adaptation of this framework in real-time applications where data from various sensors are continuously streaming could be explored. Additionally, extending the approach to further exploit temporal dynamics in time-series multi-view data could present innovative avenues for research, especially in contexts where sequential decision-making is crucial.

In conclusion, this paper lays the groundwork for developing multi-view learning systems capable of delivering both high accuracy and reliable, interpretable predictions, setting a valuable precedent for future research in robust AI systems.