Benchmarking Multimodal Sentiment Analysis (1707.09538v1)
Published 29 Jul 2017 in cs.MM and cs.CL
Abstract: We propose a framework for multimodal sentiment analysis and emotion recognition using convolutional neural network-based feature extraction from the text and visual modalities. We obtain a performance improvement of 10% over the state of the art by combining visual, text, and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research: the role of speaker-independent models, the importance of the individual modalities, and generalizability. The paper thus serves as a new benchmark for further research in multimodal sentiment analysis and demonstrates the different facets of analysis to be considered when performing such tasks.
- Erik Cambria (136 papers)
- Devamanyu Hazarika (33 papers)
- Soujanya Poria (138 papers)
- Amir Hussain (75 papers)
- R. B. V. Subramanyam (1 paper)
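
Below is a minimal PyTorch sketch of the general approach the abstract describes: CNN-extracted features from text, fused by feature-level concatenation with audio and visual feature vectors before classification. It is not the authors' implementation; all dimensions (vocabulary size, embedding size, `audio_dim`, `visual_dim`), kernel sizes, and class names (`TextCNN`, `MultimodalSentiment`) are illustrative assumptions.

```python
# Sketch of CNN text features fused with audio/visual features (assumptions, not the paper's code).
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """1-D CNN over word embeddings, a common choice for textual feature extraction."""
    def __init__(self, vocab_size=10000, emb_dim=300, n_filters=100, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes
        )
        self.out_dim = n_filters * len(kernel_sizes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        # Max-pool each convolutional feature map over time, then concatenate.
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)                 # (batch, out_dim)

class MultimodalSentiment(nn.Module):
    """Feature-level fusion: concatenate text CNN features with precomputed
    audio and visual feature vectors, then classify."""
    def __init__(self, audio_dim=74, visual_dim=512, n_classes=2):
        super().__init__()
        self.text_cnn = TextCNN()
        fused_dim = self.text_cnn.out_dim + audio_dim + visual_dim
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, token_ids, audio_feats, visual_feats):
        t = self.text_cnn(token_ids)
        return self.classifier(torch.cat([t, audio_feats, visual_feats], dim=1))

# Toy forward pass with random inputs.
model = MultimodalSentiment()
logits = model(
    torch.randint(0, 10000, (8, 50)),  # token ids
    torch.randn(8, 74),                # e.g., acoustic features from an audio toolkit
    torch.randn(8, 512),               # e.g., visual features from a separate CNN
)
print(logits.shape)  # torch.Size([8, 2])
```

One practical property of fusing by concatenation is that ablations like those discussed in the paper (assessing the importance of each modality) can be run by simply zeroing or dropping one modality's feature vector.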