
EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing (1906.08104v1)

Published 19 Jun 2019 in cs.CL

Abstract: We present the first sentence simplification model that learns explicit edit operations (ADD, DELETE, and KEEP) via a neural programmer-interpreter approach. Most current neural sentence simplification systems are variants of sequence-to-sequence models adopted from machine translation. These methods learn to simplify sentences as a byproduct of the fact that they are trained on complex-simple sentence pairs. By contrast, our neural programmer-interpreter is directly trained to predict explicit edit operations on targeted parts of the input sentence, resembling the way that humans might perform simplification and revision. Our model outperforms previous state-of-the-art neural sentence simplification models (without external knowledge) by large margins on three benchmark text simplification corpora in terms of SARI (+0.95 WikiLarge, +1.89 WikiSmall, +1.41 Newsela), and is judged by humans to produce overall better and simpler output sentences.
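To illustrate the explicit edit-operation formulation described in the abstract, the sketch below shows how a program of KEEP/DELETE/ADD labels can be deterministically executed against a source token sequence. This is a hypothetical illustration of the general idea, not the authors' code; the function name and operation encoding are assumptions.

```python
def execute_edits(src_tokens, edit_ops):
    """Apply an explicit edit program to a source sentence.

    edit_ops is a list of ("KEEP",), ("DELETE",), or ("ADD", word)
    tuples. KEEP and DELETE each consume one source token; ADD
    inserts a new word without consuming anything.
    """
    out, i = [], 0
    for op in edit_ops:
        if op[0] == "KEEP":
            out.append(src_tokens[i])
            i += 1
        elif op[0] == "DELETE":
            i += 1  # drop the source token
        elif op[0] == "ADD":
            out.append(op[1])  # insert a simpler word
    out.extend(src_tokens[i:])  # keep any remaining source tokens
    return out

# Simplify by deleting the relative clause "which was black".
src = "the cat which was black sat".split()
ops = [("KEEP",), ("KEEP",), ("DELETE",), ("DELETE",), ("DELETE",), ("KEEP",)]
print(" ".join(execute_edits(src, ops)))  # prints "the cat sat"
```

Under this view, the model's job reduces to predicting the edit-operation sequence for each source token, rather than regenerating the whole sentence as a seq2seq decoder would.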

Authors (4)
  1. Yue Dong (61 papers)
  2. Zichao Li (36 papers)
  3. Mehdi Rezagholizadeh (78 papers)
  4. Jackie Chi Kit Cheung (57 papers)
Citations (154)