
A Token-wise CNN-based Method for Sentence Compression (2009.11260v1)

Published 23 Sep 2020 in cs.CL

Abstract: Sentence compression is an NLP task aimed at shortening sentences while preserving their key information; its applications can benefit many fields, e.g., tools for language education. However, current methods are largely based on Recurrent Neural Network (RNN) models, which suffer from slow processing speed. To address this issue, we propose a token-wise Convolutional Neural Network (CNN), a CNN-based model combined with pre-trained Bidirectional Encoder Representations from Transformers (BERT) features, for deletion-based sentence compression. We also compare our model with RNN-based models and with fine-tuned BERT. Although one of the RNN-based models marginally outperforms the other models given the same input, our CNN-based model is ten times faster than the RNN-based approach.
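
To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a token-wise CNN for deletion-based compression: frozen BERT token embeddings are passed through a 1-D convolution over the token axis, and each token receives a keep/delete label. The choice of `bert-base-uncased`, the channel count, and the kernel size are illustrative assumptions.

```python
# Minimal sketch of a token-wise CNN over BERT features for
# deletion-based sentence compression. Hyperparameters are assumptions,
# not the paper's reported settings.
import torch
import torch.nn as nn
from transformers import BertTokenizerFast, BertModel

class TokenwiseCNN(nn.Module):
    def __init__(self, hidden=768, channels=256, kernel_size=3, num_labels=2):
        super().__init__()
        # 1-D convolution along the token axis; padding preserves sequence length.
        self.conv = nn.Conv1d(hidden, channels, kernel_size, padding=kernel_size // 2)
        self.classifier = nn.Linear(channels, num_labels)

    def forward(self, bert_states):                   # (batch, seq_len, hidden)
        x = bert_states.transpose(1, 2)               # (batch, hidden, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq_len, channels)
        return self.classifier(x)                     # per-token keep/delete logits

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()
cnn = TokenwiseCNN()

inputs = tokenizer("The quick brown fox jumps over the lazy dog.",
                   return_tensors="pt")
with torch.no_grad():                 # BERT used as a frozen feature extractor
    states = bert(**inputs).last_hidden_state
logits = cnn(states)                  # (1, seq_len, 2)
keep_mask = logits.argmax(-1)         # 1 = keep, 0 = delete (untrained here)
```

Because all tokens are convolved in parallel rather than processed sequentially as in an RNN, this design is what gives the CNN its speed advantage at inference time.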

Authors (5)
  1. Weiwei Hou (3 papers)
  2. Hanna Suominen (17 papers)
  3. Piotr Koniusz (84 papers)
  4. Sabrina Caldwell (11 papers)
  5. Tom Gedeon (72 papers)
Citations (3)
