
An Experimental Study of LSTM Encoder-Decoder Model for Text Simplification (1609.03663v1)

Published 13 Sep 2016 in cs.CL and cs.LG

Abstract: Text simplification (TS) aims to reduce the lexical and structural complexity of a text while retaining its semantic meaning. Current automatic TS techniques are limited to either lexical-level applications or manually defining a large number of rules. Since deep neural networks are powerful models that have achieved excellent performance on many difficult tasks, in this paper we propose to use the Long Short-Term Memory (LSTM) Encoder-Decoder model for sentence-level TS, which makes minimal assumptions about word sequence. We conduct preliminary experiments and find that the model is able to learn operation rules such as reversing, sorting, and replacing from sequence pairs, which shows that the model may potentially discover and apply rules such as modifying sentence structure, substituting words, and removing words for TS.
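A minimal PyTorch sketch, not the authors' implementation, illustrating the kind of LSTM encoder-decoder setup the abstract describes, trained here on a toy sequence-reversal task like one of the operation rules probed in the paper's preliminary experiments. The hyperparameters, the reserved token 0 used as a start-of-sequence symbol, and the random toy data are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """LSTM encoder-decoder: encode the source into a fixed state,
    then decode the target conditioned on that state."""
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, dec_in):
        # Encode the source sequence; keep only the final (h, c) state.
        _, state = self.encoder(self.embed(src))
        # Decode with teacher forcing, initialized from the encoder state.
        dec_out, _ = self.decoder(self.embed(dec_in), state)
        return self.out(dec_out)

# Toy experiment in the spirit of the paper: learn to reverse sequences.
vocab_size, seq_len, batch = 20, 8, 32   # illustrative sizes, not from the paper
model = Seq2Seq(vocab_size)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    # Tokens are drawn from 1..vocab_size-1; token 0 is reserved as <sos>.
    src = torch.randint(1, vocab_size, (batch, seq_len))
    tgt = torch.flip(src, dims=[1])                      # target: reversed source
    sos = torch.zeros(batch, 1, dtype=torch.long)
    dec_in = torch.cat([sos, tgt[:, :-1]], dim=1)        # shifted decoder input
    logits = model(src, dec_in)
    loss = loss_fn(logits.reshape(-1, vocab_size), tgt.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

At test time decoding would be autoregressive, feeding each predicted token back as the next decoder input; the teacher-forced loop above only sketches training.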

Authors (4)
  1. Tong Wang (144 papers)
  2. Ping Chen (123 papers)
  3. Kevin Amaral (2 papers)
  4. Jipeng Qiang (22 papers)
Citations (45)
