InsNet: An Efficient, Flexible, and Performant Insertion-based Text Generation Model (2102.11008v4)

Published 12 Feb 2021 in cs.CL

Abstract: We propose InsNet, an expressive insertion-based text generator with efficient training and flexible decoding (parallel or sequential). Unlike most existing insertion-based text generation methods, which must re-encode the context after each insertion operation and are therefore inefficient to train, InsNet requires only one pass of context encoding for the entire sequence during training, thanks to a novel insertion-oriented position encoding and a lightweight slot representation strategy that enable computation sharing. Furthermore, we propose an algorithm, InsNet-Dinic, to better determine the parallelization of insertion operations; it provides a controllable switch between parallel and sequential decoding, making InsNet flexible enough to handle more parallelizable tasks such as machine translation with efficient decoding, or less parallelizable tasks such as open-domain text generation, where sequential decoding guarantees high-quality outputs. Experiments on two lexically constrained text generation datasets and three machine translation datasets demonstrate InsNet's advantages over previous insertion-based methods in training speed, inference efficiency, and generation quality.
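To make the decoding paradigm concrete, below is a minimal sketch of the sequential mode of insertion-based generation under lexical constraints. The `score_insertions` callback, the slot convention, and the toy scorer are hypothetical illustrations, not InsNet's actual API; in particular, the real model also shares one pass of context encoding across steps via the paper's insertion-oriented position encoding, and InsNet-Dinic can schedule several such insertions in parallel rather than one at a time.

```python
from typing import Callable, List, Tuple

def insertion_decode(
    score_insertions: Callable[[List[str]], Tuple[str, int, bool]],
    constraints: List[str],
    max_steps: int = 64,
) -> List[str]:
    """Grow a sequence by repeated insertions (illustrative sketch).

    Starts from the lexical constraints (possibly empty) and, at each
    step, inserts one token into one of the len(seq) + 1 slots around
    and between existing tokens, until the scorer signals completion.
    """
    seq = list(constraints)
    for _ in range(max_steps):
        token, slot, done = score_insertions(seq)
        if done:
            break
        # A slot indexes a gap: 0 is before the first token,
        # len(seq) is after the last one.
        assert 0 <= slot <= len(seq)
        seq.insert(slot, token)
    return seq

# Toy usage: a scorer that deterministically builds "the cat sat"
# by always inserting at the rightmost slot (purely illustrative,
# standing in for a trained insertion model).
plan = iter(["the", "cat", "sat"])

def toy_scorer(seq: List[str]) -> Tuple[str, int, bool]:
    word = next(plan, None)
    if word is None:
        return "", 0, True           # stop: nothing left to insert
    return word, len(seq), False     # insert at the rightmost slot

print(insertion_decode(toy_scorer, constraints=[]))  # ['the', 'cat', 'sat']
```

Parallel decoding, as controlled by InsNet-Dinic, would replace the single `(token, slot)` step with a batch of insertions into disjoint slots per step; the abstract does not specify the scheduling details, so they are omitted here.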

Citations (12)
