
Learning Generic Sentence Representations Using Convolutional Neural Networks (1611.07897v2)

Published 23 Nov 2016 in cs.CL and cs.LG

Abstract: We propose a new encoder-decoder approach to learn distributed sentence representations that are applicable to multiple purposes. The model is learned by using a convolutional neural network as an encoder to map an input sentence into a continuous vector, and using a long short-term memory recurrent neural network as a decoder. Several tasks are considered, including sentence reconstruction and future sentence prediction. Further, a hierarchical encoder-decoder model is proposed to encode a sentence to predict multiple future sentences. By training our models on a large collection of novels, we obtain a highly generic convolutional sentence encoder that performs well in practice. Experimental results on several benchmark datasets, and across a broad range of applications, demonstrate the superiority of the proposed model over competing methods.
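
The abstract describes the architecture only at a high level, so the PyTorch sketch below is an illustrative reading of it rather than the authors' code. The class names, the embedding size, the filter window widths (3, 4, 5), and the 900-dimensional sentence code are all assumptions made for the example; the paper's exact hyperparameters may differ.

```python
import torch
import torch.nn as nn

# NOTE: dimensions and window sizes below are illustrative assumptions,
# not the paper's reported settings.

class ConvSentenceEncoder(nn.Module):
    """CNN encoder: embed tokens, run 1-D convolutions over the sentence,
    and max-pool over time to obtain a fixed-size sentence vector."""
    def __init__(self, vocab_size, emb_dim=300, num_filters=300,
                 kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, num_filters, k) for k in kernel_sizes)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)  # (batch, emb_dim, seq_len)
        # Max-pool each feature map over time, then concatenate.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)         # (batch, 900) here


class LSTMDecoder(nn.Module):
    """LSTM decoder conditioned on the sentence code; predicts the target
    sentence (reconstruction or a future sentence) token by token."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=900, code_dim=900):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.init_h = nn.Linear(code_dim, hidden_dim)  # code -> initial state
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, code, targets):           # targets: (batch, tgt_len)
        h0 = torch.tanh(self.init_h(code)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(self.embed(targets), (h0, c0))
        return self.out(hidden)                 # logits: (batch, tgt_len, vocab)
```

In this reading, training pairs the encoder with one or more decoders (one reconstructing the input, another predicting the following sentence) under a cross-entropy loss over target tokens; the hierarchical variant would add a higher-level recurrence over sentence codes so that multiple future sentences can be predicted. At test time only the convolutional encoder is kept, and its pooled output serves as the generic sentence representation.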

Authors (6)
  1. Zhe Gan (135 papers)
  2. Yunchen Pu (20 papers)
  3. Ricardo Henao (71 papers)
  4. Chunyuan Li (122 papers)
  5. Xiaodong He (162 papers)
  6. Lawrence Carin (203 papers)
Citations (98)
