Multi-modal Fusion using Fine-tuned Self-attention and Transfer Learning for Veracity Analysis of Web Information (2109.12547v1)

Published 26 Sep 2021 in cs.SI

Abstract: The nuisance of misinformation and fake news has escalated manyfold since the advent of online social networks. Human judgment and decision-making are negatively influenced by manipulated, fabricated, biased, or unverified news posts, so there is a strong demand for veracity analysis systems that detect fake content across multiple data modalities. To address this critical issue, we propose an architecture that considers both the textual and visual attributes of the data. After pre-processing, text and image features are extracted from the training data using separate deep learning models. Textual features are extracted with the BERT and ALBERT language models, which leverage the benefits of bidirectional transformer training via a deep self-attention mechanism; the Inception-ResNet-v2 deep neural network performs feature extraction for image data. The proposed framework focuses on two independent multi-modal fusion architectures: BERT with Inception-ResNet-v2, and ALBERT with Inception-ResNet-v2. Multi-modal fusion of the textual and visual branches is extensively experimented with and analysed using concatenation of feature vectors (Early Fusion) and weighted averaging of probabilities (Late Fusion). Three publicly available, widely accepted datasets (All Data, Weibo, and MediaEval 2016, comprising English news articles, Chinese news articles, and tweets respectively) are used so that the framework's results can be properly tested and compared with notable prior work in the domain.
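The two fusion strategies named in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature dimensions (a 768-d BERT sentence embedding, a 1536-d Inception-ResNet-v2 pooled vector), the branch weights, and the two-class probability vectors are all assumed values for demonstration.

```python
import numpy as np

def early_fusion(text_feat, image_feat):
    """Early Fusion: concatenate modality feature vectors into one
    joint representation, which a downstream classifier would consume."""
    return np.concatenate([text_feat, image_feat])

def late_fusion(p_text, p_image, w_text=0.5, w_image=0.5):
    """Late Fusion: weighted average of the class probabilities
    produced independently by the text and image branches."""
    return w_text * np.asarray(p_text) + w_image * np.asarray(p_image)

# Hypothetical feature sizes: 768-d text embedding, 1536-d image vector.
text_feat = np.random.rand(768)
image_feat = np.random.rand(1536)
fused = early_fusion(text_feat, image_feat)  # 2304-d joint vector

# Illustrative per-branch [fake, real] probabilities and branch weights.
probs = late_fusion([0.8, 0.2], [0.6, 0.4], w_text=0.7, w_image=0.3)
```

Early Fusion lets the classifier learn cross-modal interactions from the joint vector, while Late Fusion keeps the branches fully independent and only combines their decisions.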

Authors (2)
  1. Priyanka Meel (5 papers)
  2. Dinesh Kumar Vishwakarma (35 papers)
Citations (17)
