Every Document Owns Its Structure: Inductive Text Classification via Graph Neural Networks (2004.13826v2)

Published 22 Apr 2020 in cs.CL

Abstract: Text classification is fundamental in NLP, and Graph Neural Networks (GNN) are recently applied in this task. However, the existing graph-based works can neither capture the contextual word relationships within each document nor fulfil the inductive learning of new words. In this work, to overcome such problems, we propose TextING for inductive text classification via GNN. We first build individual graphs for each document and then use GNN to learn the fine-grained word representations based on their local structures, which can also effectively produce embeddings for unseen words in the new document. Finally, the word nodes are aggregated as the document embedding. Extensive experiments on four benchmark datasets show that our method outperforms state-of-the-art text classification methods.

Authors (6)
  1. Yufeng Zhang (67 papers)
  2. Xueli Yu (4 papers)
  3. Zeyu Cui (29 papers)
  4. Shu Wu (109 papers)
  5. Zhongzhen Wen (1 paper)
  6. Liang Wang (512 papers)
Citations (259)

Summary

Inductive Text Classification Using Graph Neural Networks: An Examination of TextING

Yufeng Zhang et al. present TextING (Inductive Text classification via Graph neural networks), a novel approach to text classification that addresses key limitations of both traditional and prior graph-based methods. The method applies Graph Neural Networks (GNNs) in a way specifically designed to overcome two shortcomings of earlier graph-based work: the failure to capture contextual word relationships within each document, and the inability to learn embeddings for words unseen during training.

Core Contributions and Methodology

TextING's primary contribution is a GNN architecture in which each document is treated as an individual graph. This allows the model to capture fine-grained, document-specific word interactions, in contrast to earlier GNN-based text classification models that rely on a single global graph built over the whole corpus. The paper's methodology consists of three core components:

  1. Graph Construction: Each document is transformed into a graph whose vertices are the document's unique words and whose edges connect words that co-occur within a fixed-size sliding window. This local structure preserves the contextual relationships specific to each document (a construction sketch follows this list).
  2. Graph-based Word Interaction: A Gated Graph Neural Network (GGNN) updates the node representations iteratively, integrating information from each word's neighbors into its representation and capturing higher-order interactions as interaction steps are stacked (the gated update is written out after this list).
  3. Readout Function: After the word-interaction phase, the word representations are aggregated into a document-level embedding through a readout function that combines soft attention with a non-linear transformation; the resulting embedding is then used for classification (a sketch follows the equations below).
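
To make step 1 concrete, here is a minimal sketch of per-document graph construction in Python, assuming simple whitespace tokenization; the function name and the window size of 3 are illustrative choices rather than details taken from the paper.

```python
def build_document_graph(tokens, window_size=3):
    """Build one graph per document: vertices are the document's unique
    words, and an undirected edge links any two words that co-occur
    within a sliding window of `window_size` consecutive tokens."""
    vocab = sorted(set(tokens))
    index = {word: i for i, word in enumerate(vocab)}
    edges = set()
    for i, word in enumerate(tokens):
        # Link this token to the following tokens inside the window.
        for j in range(i + 1, min(i + window_size, len(tokens))):
            u, v = index[word], index[tokens[j]]
            if u != v:  # skip self-loops from repeated words
                edges.add((min(u, v), max(u, v)))
    return vocab, edges

# Toy usage: each document yields its own vocabulary and edge set.
vocab, edges = build_document_graph("the cat sat on the mat".split())
```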
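
Step 2's gated update can be written out explicitly. The recurrence below is the standard GGNN formulation that this family of models builds on (notation adapted; the paper's exact parameterization may differ in detail), where $\mathbf{A}$ is the document graph's adjacency matrix and $\mathbf{h}^{t}$ stacks the word-node states at interaction step $t$:

$$
\begin{aligned}
\mathbf{a}^{t} &= \mathbf{A}\,\mathbf{h}^{t-1}\mathbf{W}_a \\
\mathbf{z}^{t} &= \sigma\left(\mathbf{W}_z\mathbf{a}^{t} + \mathbf{U}_z\mathbf{h}^{t-1} + \mathbf{b}_z\right) \\
\mathbf{r}^{t} &= \sigma\left(\mathbf{W}_r\mathbf{a}^{t} + \mathbf{U}_r\mathbf{h}^{t-1} + \mathbf{b}_r\right) \\
\tilde{\mathbf{h}}^{t} &= \tanh\left(\mathbf{W}_h\mathbf{a}^{t} + \mathbf{U}_h(\mathbf{r}^{t} \odot \mathbf{h}^{t-1}) + \mathbf{b}_h\right) \\
\mathbf{h}^{t} &= \tilde{\mathbf{h}}^{t} \odot \mathbf{z}^{t} + \mathbf{h}^{t-1} \odot (1 - \mathbf{z}^{t})
\end{aligned}
$$

Here $\mathbf{a}^{t}$ aggregates messages from each word's graph neighbors, $\mathbf{z}^{t}$ and $\mathbf{r}^{t}$ act as GRU-style update and reset gates, and running the recurrence for $t$ steps propagates information to $t$-hop neighbors, which is what yields the higher-order interactions described above.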
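
Finally, a NumPy sketch of the readout in step 3, assuming the soft-attention form described above; the mean-plus-max pooling combination, the layer shapes, and all names here are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def readout(H, W_att, b_att, W_emb, b_emb):
    """Aggregate word-node states H (num_nodes x d) into one document vector.

    A sigmoid branch produces per-node soft-attention gates, a tanh branch
    produces non-linear node features, and the gated features are pooled
    (here by both mean and max) to form the document embedding."""
    att = sigmoid(H @ W_att + b_att)   # soft attention weight per node
    emb = np.tanh(H @ W_emb + b_emb)   # non-linear transformation per node
    gated = att * emb                  # element-wise gating
    return gated.mean(axis=0) + gated.max(axis=0)

# Toy usage: 5 word nodes with 8-dimensional states.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
W1, b1 = rng.standard_normal((8, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 8)), np.zeros(8)
doc_vec = readout(H, W1, b1, W2, b2)   # shape: (8,)
```

The document vector `doc_vec` would then pass through a dense layer and softmax to produce class probabilities.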

Experimental Evaluation

The authors validate their method on four benchmark datasets (MR, R8, R52, and Ohsumed), demonstrating that TextING outperforms state-of-the-art text classification methods, including TextGCN. Notably, TextING remains robust when a high proportion of test words were unseen during training, as highlighted by its substantial improvement on the MR dataset. This confirms its inductive learning capability, a critical advance over transductive methods, which cannot produce embeddings for unseen words.

Numerical Results and Methodological Insight

The empirical results underscore TextING's advantage, with accuracy improvements across all four datasets. In particular, TextING attains its largest gains on datasets with a high proportion of new words, such as MR, where it improves on the best-performing baseline by nearly 2%. Sensitivity analyses further demonstrate the model's stability across varying graph densities (sliding-window sizes) and numbers of word-interaction steps.

Implications and Future Directions

Treating each document as a distinct graph suggests a versatile framework that could extend beyond text classification into domains where context-specific relationships are pivotal, such as personalized recommendation or adaptive user modeling.

In future work, extending TextING's graph construction to incorporate semantic and syntactic dependencies could augment its discriminative power. Furthermore, hybrid models that integrate TextING's local document graphs with traditional global graph methods might combine their complementary strengths and further improve performance.

Conclusion

Overall, TextING represents a compelling approach for document classification, with particular strength in generalizing across unseen linguistic contexts. This capability not only addresses previous shortcomings in text classification but also lays the groundwork for more contextually aware GNN applications in NLP. The research by Zhang et al. contributes a substantial methodological innovation, promoting a nuanced examination of text through the lens of graph-based representations.