
Visual and Textual Sentiment Analysis Using Deep Fusion Convolutional Neural Networks (1711.07798v1)

Published 21 Nov 2017 in cs.CL, cs.CV, and cs.IR

Abstract: Sentiment analysis has attracted increasing attention and become a very active research topic due to its potential applications in personalized recommendation, opinion mining, etc. Most existing methods rely on either textual or visual data alone and cannot achieve satisfactory results, as it is difficult to extract sufficient information from a single modality. Inspired by the observation that there is a strong semantic correlation between visual and textual data in social media, we propose an end-to-end deep fusion convolutional neural network that jointly learns textual and visual sentiment representations from training examples. The information from the two modalities is fused in a pooling layer and fed into fully-connected layers to predict the sentiment polarity. We evaluate the proposed approach on two widely used data sets. Results show that our method achieves promising results compared with state-of-the-art methods, which clearly demonstrates its competency.
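To make the described architecture concrete, below is a minimal PyTorch sketch of a two-branch network in the spirit of the abstract: a visual CNN branch and a textual convolutional branch whose pooled features are fused and passed through fully-connected layers to predict sentiment polarity. The backbones, layer sizes, and the exact fusion operator (here, concatenation of pooled features) are illustrative assumptions, not the paper's published configuration.

```python
import torch
import torch.nn as nn


class DeepFusionSentimentNet(nn.Module):
    """Sketch of a visual-textual fusion network; hyperparameters are assumptions."""

    def __init__(self, vocab_size=20000, embed_dim=128, num_classes=2):
        super().__init__()
        # Visual branch: small CNN over RGB images (a pretrained backbone
        # could be substituted here).
        self.visual = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 64, 1, 1)
        )
        # Textual branch: word embeddings + 1-D convolution over token positions.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)
        # Fully-connected layers applied to the fused representation.
        self.classifier = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, images, token_ids):
        v = self.visual(images).flatten(1)                     # (B, 64)
        t = self.embed(token_ids).transpose(1, 2)              # (B, E, L)
        t = torch.relu(self.text_conv(t)).max(dim=2).values    # global max pool -> (B, 64)
        fused = torch.cat([v, t], dim=1)                       # fuse the two modalities
        return self.classifier(fused)                          # sentiment logits


# Example forward pass with dummy inputs.
model = DeepFusionSentimentNet()
logits = model(torch.randn(4, 3, 64, 64), torch.randint(0, 20000, (4, 50)))
print(logits.shape)  # torch.Size([4, 2])
```

Training such a model end to end with a cross-entropy loss on (image, text, label) triples would mirror the joint learning setup the abstract describes, though the paper's actual training details are not reproduced here.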

Authors (3)
  1. Xingyue Chen (4 papers)
  2. Yunhong Wang (115 papers)
  3. Qingjie Liu (64 papers)
Citations (33)
