
Learning Context-Sensitive Convolutional Filters for Text Processing (1709.08294v3)

Published 25 Sep 2017 in cs.CL, cs.LG, and stat.ML

Abstract: Convolutional neural networks (CNNs) have recently emerged as a popular building block for NLP. Despite their success, most existing CNN models employed in NLP share the same learned (and static) set of filters for all input sentences. In this paper, we consider an approach that uses a small meta network to learn context-sensitive convolutional filters for text processing. The role of the meta network is to abstract the contextual information of a sentence or document into a set of input-aware filters. We further generalize this framework to model sentence pairs, where a bidirectional filter generation mechanism is introduced to encapsulate co-dependent sentence representations. In our benchmarks on four different tasks, including ontology classification, sentiment analysis, answer sentence selection, and paraphrase identification, our proposed model, a modified CNN with context-sensitive filters, consistently outperforms the standard CNN and attention-based CNN baselines. By visualizing the learned context-sensitive filters, we further validate and rationalize the effectiveness of the proposed framework.
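The core idea in the abstract — a meta network that maps a sentence's context into the weights of the convolutional filters applied to that same sentence — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the average-pooled context vector, the single linear meta layer, and all dimensions are simplifying assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def meta_filters(context, W_meta, n_filters, emb_dim, k):
    """Meta network (assumed here to be a single linear map):
    turn a context vector into a bank of input-aware 1-D conv filters."""
    flat = context @ W_meta                       # (n_filters * emb_dim * k,)
    return flat.reshape(n_filters, emb_dim, k)

def conv1d_same(x, filt):
    """1-D convolution over time with 'same' padding.
    x: (seq_len, emb_dim), filt: (emb_dim, k) -> (seq_len,)"""
    k = filt.shape[1]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.array([np.sum(xp[t:t + k].T * filt) for t in range(x.shape[0])])

def context_sensitive_cnn(x, W_meta, n_filters=8, k=3):
    """x: (seq_len, emb_dim) token embeddings for one sentence.
    Filters are generated from the sentence itself, so each input
    is convolved with its own filter set."""
    context = x.mean(axis=0)                      # average pooling as context
    filters = meta_filters(context, W_meta, n_filters, x.shape[1], k)
    feat_maps = np.stack([conv1d_same(x, f) for f in filters])  # (n_filters, seq_len)
    return np.maximum(feat_maps, 0).max(axis=1)   # ReLU + max-over-time pooling

emb_dim, n_filters, k = 16, 8, 3
x = rng.standard_normal((10, emb_dim))            # one toy sentence, 10 tokens
W_meta = 0.1 * rng.standard_normal((emb_dim, n_filters * emb_dim * k))
feats = context_sensitive_cnn(x, W_meta, n_filters, k)
print(feats.shape)  # (8,)
```

Because `W_meta` produces different filter banks for different sentences, two inputs with distinct contexts are convolved with distinct filters, in contrast to a standard CNN whose filters are fixed after training.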

Authors (4)
  1. Dinghan Shen (34 papers)
  2. Martin Renqiang Min (44 papers)
  3. Yitong Li (95 papers)
  4. Lawrence Carin (203 papers)
Citations (11)
