Aspect Level Sentiment Classification with Deep Memory Network (1605.08900v2)

Published 28 May 2016 in cs.CL

Abstract: We introduce a deep memory network for aspect level sentiment classification. Unlike feature-based SVM and sequential neural models such as LSTM, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Such importance degree and text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets demonstrate that our approach performs comparable to state-of-art feature based SVM system, and substantially better than LSTM and attention-based LSTM architectures. On both datasets we show that multiple computational layers could improve the performance. Moreover, our approach is also fast. The deep memory network with 9 layers is 15 times faster than LSTM with a CPU implementation.

Authors (3)
  1. Duyu Tang (65 papers)
  2. Bing Qin (186 papers)
  3. Ting Liu (329 papers)
Citations (872)

Summary

  • The paper introduces a novel deep memory network that uses iterative attention mechanisms to capture the significance of context words for improved sentiment classification.
  • The model substantially outperforms LSTM-based approaches and performs comparably to a strong feature-based SVM, reaching accuracies of 72.21% on the laptop dataset and 80.95% on the restaurant dataset.
  • The deep memory network offers significant computational efficiency, running about 15 times faster than basic LSTM models on CPU implementations.

Aspect Level Sentiment Classification with Deep Memory Network

The paper Aspect Level Sentiment Classification with Deep Memory Network by Duyu Tang, Bing Qin, and Ting Liu presents an innovative approach to aspect-level sentiment classification. This method leverages a deep memory network architecture to effectively capture the significance of each context word relative to a given aspect within a sentence, thereby enhancing sentiment classification performance.

Methodology Overview

The proposed approach distinguishes itself from traditional feature-based SVMs and sequential neural models such as LSTMs by employing a deep memory network. This network uses multiple computational layers, where each layer functions as a neural attention model over an external memory. The key components of this architecture include:

  1. Memory Network: Inspired by memory networks used in question answering, the system builds an external memory from the context-word embeddings; this memory is read via an attention mechanism that is trained end-to-end with the prediction objective.
  2. Aspect and Context Representation: Each word in the sentence is mapped to an embedding vector. The context-word embeddings form the memory matrix, while the aspect is represented by its word embedding (or the average of its word embeddings for a multi-word aspect) and serves as the initial query.
  3. Attention Mechanism: The attention model assigns a weight to each context word according to its relevance to the aspect. It combines content-based attention with a location-based component that accounts for the distance between each context word and the aspect; a minimal code sketch of one such attention hop follows this list.
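
To make the data flow concrete, the sketch below shows one plausible reading of this architecture: context-word embeddings act as the memory, the aspect vector is the query, and each hop applies content-based attention, optionally after a simple distance-based down-weighting of the memory. The names (attention_hop, location_weighted_memory, predict_polarity), the tanh scoring function, and the specific location scheme are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def location_weighted_memory(context_embeddings, distances):
    """Down-weight context words that are far from the aspect.

    Uses a simple 1 - l_i / n scheme purely as an illustration; the paper
    explores several location-attention variants.
    """
    n = len(context_embeddings)
    weights = 1.0 - np.asarray(distances, dtype=float) / n
    return context_embeddings * weights[:, None]

def attention_hop(memory, query, W_att, b_att, W_lin):
    """One computational layer: content-based attention over the memory.

    memory: (n, d) matrix of context-word embeddings (the external memory)
    query:  (d,) current aspect/query vector
    W_att:  (1, 2d) attention parameters, b_att: scalar bias
    W_lin:  (d, d) linear transform applied to the query between hops
    """
    scores = np.array([
        np.tanh(W_att @ np.concatenate([slot, query]) + b_att).item()
        for slot in memory
    ])
    alpha = softmax(scores)            # importance of each context word
    attended = alpha @ memory          # weighted sum of memory slots
    return attended + W_lin @ query    # output feeds the next hop as the query

def predict_polarity(memory, aspect_vec, params, n_hops=9):
    """Stack several hops, then project the final vector onto sentiment classes."""
    q = aspect_vec
    for _ in range(n_hops):
        q = attention_hop(memory, q, params["W_att"], params["b_att"], params["W_lin"])
    return softmax(params["W_out"] @ q + params["b_out"])

# Toy usage with random parameters: 5 context words, 50-dim embeddings, 3 classes.
rng = np.random.default_rng(0)
d, n, n_classes = 50, 5, 3
memory = location_weighted_memory(rng.normal(size=(n, d)), distances=[2, 1, 1, 2, 3])
params = {
    "W_att": rng.normal(size=(1, 2 * d)) * 0.1, "b_att": 0.0,
    "W_lin": rng.normal(size=(d, d)) * 0.1,
    "W_out": rng.normal(size=(n_classes, d)) * 0.1, "b_out": np.zeros(n_classes),
}
print(predict_polarity(memory, rng.normal(size=d), params))
```

Because only the query is transformed between hops while the memory stays fixed, adding layers increases the depth of reasoning over the same context at modest computational cost, which is consistent with the runtime advantage reported below.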

Experimental Evaluation

The authors evaluated their approach on SemEval 2014 laptop and restaurant datasets, providing a robust comparison against several baseline and advanced models, including LSTM, TDLSTM, TDLSTM+ATT, and ContextAVG. Key findings include:

  • Performance: Their deep memory network consistently outperformed LSTM and attention-based LSTM models, achieving accuracy comparable to state-of-the-art SVM systems. Notably, the nine-layer memory network yielded classification accuracies of 72.21% (laptop) and 80.95% (restaurant), placing it on par with, or slightly ahead of, the feature-based SVM baseline.
  • Training Efficiency: The deep memory network demonstrated significant runtime efficiency. The implementation with nine layers was found to be approximately 15 times faster than basic LSTM models when executed on a CPU, affirming the computational advantages of the proposed method.
  • Attention Visualization: Visualization of attention weights across different hops revealed how the deep memory network adaptively focuses on relevant context words through iterative refinements, further validating its design rationale (a small extension of the earlier sketch for collecting these weights appears after this list).
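
For the kind of per-hop inspection described above, the earlier hypothetical attention_hop sketch can be extended to also return its attention distribution, so the weights at each hop can be plotted; this reuses numpy, softmax, and the params dictionary defined in that sketch and is likewise an assumption, not the authors' code.

```python
def attention_hop_with_weights(memory, query, W_att, b_att, W_lin):
    """Same computation as attention_hop, but also returns the attention
    distribution so it can be inspected or plotted per hop."""
    scores = np.array([
        np.tanh(W_att @ np.concatenate([slot, query]) + b_att).item()
        for slot in memory
    ])
    alpha = softmax(scores)
    return alpha @ memory + W_lin @ query, alpha

def collect_hop_attention(memory, aspect_vec, params, n_hops=9):
    """Run all hops and keep each hop's weights for visualization."""
    q, per_hop = aspect_vec, []
    for _ in range(n_hops):
        q, alpha = attention_hop_with_weights(
            memory, q, params["W_att"], params["b_att"], params["W_lin"])
        per_hop.append(alpha)
    return q, per_hop   # per_hop[k][i] = weight of context word i at hop k
```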

Implications and Future Work

The practical implications of this research are significant, particularly for improving the granularity and accuracy of sentiment analysis of customer feedback across diverse domains. Theoretically, the work advances the understanding of memory networks and attention mechanisms in NLP tasks. Future research could extend this framework by integrating syntactic structure, exploring richer embedding techniques, and experimenting with alternative attention mechanisms to further improve model performance and interpretability.

Overall, this paper makes a substantial contribution to the field of sentiment analysis by introducing a scalable, efficient, and effective deep memory network model for aspect-level sentiment classification. It lays a solid foundation for subsequent explorations and innovations in leveraging attention mechanisms and memory networks within NLP.