
A C-LSTM Neural Network for Text Classification (1511.08630v2)

Published 27 Nov 2015 in cs.CL

Abstract: Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural network (CNN) and recurrent neural network (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-LSTM utilizes CNN to extract a sequence of higher-level phrase representations, which are then fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is able to capture both local features of phrases as well as global and temporal sentence semantics. We evaluate the proposed architecture on sentiment classification and question classification tasks. The experimental results show that the C-LSTM outperforms both CNN and LSTM and can achieve excellent performance on these tasks.

An In-depth Analysis of C-LSTM Neural Network for Text Classification

The paper "A C-LSTM Neural Network for Text Classification" by Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Francis C.M. Lau proposes a novel neural network architecture—C-LSTM—which synergizes the strengths of Convolutional Neural Networks (CNNs) and Long Short-term Memory Networks (LSTMs) to enhance the efficacy of sentence representation and text classification tasks.

Introduction

The paper begins by highlighting the challenges in traditional sentence modeling, which often suffers from the curse of dimensionality and fails to capture word order effectively. To address these issues, the paper introduces C-LSTM, which combines CNNs' ability to learn local n-gram features with LSTMs' specialization in capturing long-term dependencies in sequences. By integrating these architectures, C-LSTM aims to capture both localized phrase-level representations and global, temporal sentence semantics.

Model Architecture

The C-LSTM model consists of two primary components:

  1. Convolutional Layer: This layer extracts higher-level n-gram features from the input sentences. Each filter convolves with word vectors to generate a sequence of feature maps, which are then fed into the LSTM.
  2. LSTM Layer: This layer captures long-term dependencies over the sequence of higher-level n-gram features produced by the convolutional layer.

Distinctively, the C-LSTM model does not apply pooling after the convolution operation, preserving the sequential order of the feature maps before they are fed into the LSTM. This design lets the model balance localized phrase features against global sentence semantics.
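The paper does not include reference code, so the following is a minimal PyTorch sketch of the architecture as described: a single convolutional layer with no pooling, followed by an LSTM whose final hidden state feeds a classifier. The hyperparameter values (embedding size, filter count, hidden size, class count) are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class CLSTM(nn.Module):
    """Minimal C-LSTM sketch: CNN n-gram features -> LSTM -> classifier."""

    def __init__(self, vocab_size, embed_dim=300, num_filters=150,
                 filter_len=3, hidden_dim=150, num_classes=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolution over the token dimension extracts n-gram
        # (here tri-gram) features; no pooling follows, so the output
        # keeps its sequential order for the LSTM.
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=filter_len)
        self.relu = nn.ReLU()
        self.lstm = nn.LSTM(num_filters, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        x = self.embedding(token_ids)       # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)               # (batch, embed_dim, seq_len)
        x = self.relu(self.conv(x))         # (batch, num_filters, seq_len - filter_len + 1)
        x = x.transpose(1, 2)               # back to (batch, steps, features)
        _, (h_n, _) = self.lstm(x)          # h_n: (1, batch, hidden_dim)
        return self.fc(h_n.squeeze(0))      # (batch, num_classes) logits

# Example forward pass on random token ids (shapes are illustrative).
model = CLSTM(vocab_size=20000)
logits = model(torch.randint(0, 20000, (8, 40)))  # -> (8, 5)
```

Note that this sketch implements the single-filter-length variant; the paper also explores multiple filter lengths whose feature maps are combined before the LSTM.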

Experimental Evaluation

The paper evaluates the C-LSTM architecture on two tasks: sentiment classification using the Stanford Sentiment Treebank (SST) dataset and question type classification using the TREC dataset. The results show that C-LSTM performs competitively against state-of-the-art models in both tasks.

Sentiment Classification

For sentiment classification, the C-LSTM model achieves an accuracy of 49.2% for fine-grained classification and 87.8% for binary classification on the SST dataset. These results indicate that C-LSTM is effective in capturing nuanced sentiment information, outperforming several strong baseline models, including standard CNNs and LSTMs.

Question Type Classification

On the TREC dataset, the C-LSTM model attains an accuracy of 94.6%, surpassing other neural network-based models and closely approaching the performance of SVM classifiers that rely on extensive handcrafted features. This demonstrates the model's ability to learn semantic representations that accurately capture the intent of questions.

Model Analysis

An intriguing aspect of the paper is the analysis of filter configurations in the convolutional layer. Contrary to the intuition that multiple filter lengths would perform better, the paper finds that a single convolutional layer with a filter length of 3 achieves the highest accuracy. This suggests that the tri-gram features are particularly effective in capturing local contextual information necessary for the LSTM to learn meaningful temporal dependencies.
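As a rough illustration of how such an ablation could be run against the sketch above, one might sweep single filter lengths as follows; `train_and_eval` is an assumed helper standing in for a full training and validation loop, not something the paper provides.

```python
# Hypothetical single-filter-length sweep, mirroring the paper's
# comparison; CLSTM is the sketch defined earlier, and train_and_eval
# is an assumed stand-in for training plus dev-set evaluation.
for filter_len in (2, 3, 4):
    model = CLSTM(vocab_size=20000, filter_len=filter_len)
    dev_acc = train_and_eval(model)  # assumed helper
    print(f"filter length {filter_len}: dev accuracy {dev_acc:.3f}")
```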

Implications and Future Work

The introduction of C-LSTM has both practical and theoretical implications. Practically, it presents a robust model that integrates CNN and LSTM architectures, offering an end-to-end solution for a range of text classification tasks without relying on external linguistic resources like syntactic parse trees. Theoretically, it opens avenues for further exploration into the integration of different neural architectures to leverage their respective strengths.

Future work could investigate enhancements such as tensor-based operations or tree-structured convolutions to produce more structured and compact feature representations, potentially improving the LSTM's performance further. Additionally, extending C-LSTM to other NLP tasks such as language modeling, machine translation, and more complex document-level classification could yield valuable insights.

Conclusion

The C-LSTM model presents a compelling approach to sentence representation and text classification by effectively combining the strengths of CNNs and LSTMs. With promising results demonstrated across sentiment and question type classification tasks, C-LSTM sets a solid foundation for future research into hybrid neural network models for natural language processing applications.
