Text Generation with Text-Editing Models (2206.07043v1)

Published 14 Jun 2022 in cs.CL

Abstract: Text-editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, simplification, and style transfer. These tasks share a common trait - they exhibit a large amount of textual overlap between the source and target texts. Text-editing models take advantage of this observation and learn to generate the output by predicting edit operations applied to the source sequence. In contrast, seq2seq models generate outputs word-by-word from scratch thus making them slow at inference time. Text-editing models provide several benefits over seq2seq models including faster inference speed, higher sample efficiency, and better control and interpretability of the outputs. This tutorial provides a comprehensive overview of text-editing models and current state-of-the-art approaches, and analyzes their pros and cons. We discuss challenges related to productionization and how these models can be used to mitigate hallucination and bias, both pressing challenges in the field of text generation.
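To illustrate the core idea, below is a minimal sketch of how a tag-based text-editing model realizes its output by applying predicted edit operations to the source tokens. It assumes LaserTagger-style tags of the form KEEP, DELETE, or TAG|phrase, where the attached phrase is inserted before the corresponding source token; the tag vocabulary and the grammatical-error-correction example are illustrative, not the tutorial's exact formulation.

```python
# Minimal sketch: realizing a target sentence from per-token edit tags.
# Assumes LaserTagger-style tags (KEEP, DELETE, optionally "TAG|phrase"
# where the phrase is inserted before the source token). Illustrative only.

def apply_edit_tags(source_tokens, tags):
    """Apply one edit tag per source token and return the target tokens."""
    assert len(source_tokens) == len(tags)
    output = []
    for token, tag in zip(source_tokens, tags):
        base, _, added_phrase = tag.partition("|")
        if added_phrase:              # insertion attached to this position
            output.extend(added_phrase.split("_"))
        if base == "KEEP":            # copy the source token unchanged
            output.append(token)
        elif base != "DELETE":
            raise ValueError(f"Unknown tag: {tag}")
    return output

# Example: grammatical error correction, where most tokens are simply kept.
src = "He go to school yesterday".split()
tags = ["KEEP", "DELETE|went", "KEEP", "KEEP", "KEEP"]
print(" ".join(apply_edit_tags(src, tags)))
# -> "He went to school yesterday"
```

Because most tags are KEEP, the model only has to predict a short, mostly-copying tag sequence rather than generating every target word from scratch, which is what gives text-editing models their speed and sample-efficiency advantages over seq2seq decoding.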

Authors (10)
  1. Eric Malmi (26 papers)
  2. Yue Dong (61 papers)
  3. Jonathan Mallinson (13 papers)
  4. Aleksandr Chuklin (9 papers)
  5. Jakub Adamek (7 papers)
  6. Daniil Mirylenka (3 papers)
  7. Felix Stahlberg (31 papers)
  8. Sebastian Krause (9 papers)
  9. Shankar Kumar (34 papers)
  10. Aliaksei Severyn (29 papers)
Citations (25)
