Improving the Diversity of Unsupervised Paraphrasing with Embedding Outputs (2110.13231v1)
Published 25 Oct 2021 in cs.CL
Abstract: We present a novel technique for zero-shot paraphrase generation. The key contribution is an end-to-end multilingual paraphrasing model that is trained using translated parallel corpora to generate paraphrases into "meaning spaces" -- replacing the final softmax layer with word embeddings. This architectural modification, plus a training procedure that incorporates an autoencoding objective, enables effective parameter sharing across languages for more fluent monolingual rewriting, and facilitates fluency and diversity in generation. Our continuous-output paraphrase generation models outperform zero-shot paraphrasing baselines when evaluated on two languages using a battery of computational metrics as well as in human assessment.
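The core architectural idea, swapping the final softmax for predictions in a pretrained word-embedding space, can be sketched roughly as below. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the class name `ContinuousOutputHead`, the dimensions, and the cosine-distance training loss are all assumptions (the paper's continuous-output models may use a different distance or loss).

```python
# Minimal sketch of a continuous-output decoder head (illustrative, not the
# authors' code). Instead of projecting hidden states to a vocabulary-sized
# softmax, the model predicts a point in a frozen word-embedding "meaning
# space" and is trained to land close to the gold token's embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousOutputHead(nn.Module):
    def __init__(self, hidden_dim: int, pretrained_embeddings: torch.Tensor):
        super().__init__()
        # Frozen, unit-normalized embedding table, shape (vocab_size, emb_dim).
        self.register_buffer("emb_table", F.normalize(pretrained_embeddings, dim=-1))
        self.proj = nn.Linear(hidden_dim, pretrained_embeddings.size(1))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Map decoder hidden states into the embedding space.
        return self.proj(hidden)

    def loss(self, hidden: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
        # Cosine-distance loss (an assumed choice) between predicted vectors
        # and the embeddings of the gold target tokens.
        pred = F.normalize(self.forward(hidden), dim=-1)
        gold = self.emb_table[target_ids]
        return (1.0 - (pred * gold).sum(-1)).mean()

    @torch.no_grad()
    def decode(self, hidden: torch.Tensor) -> torch.Tensor:
        # Inference: nearest neighbor in the embedding table plays the role
        # that argmax over a softmax would otherwise play.
        pred = F.normalize(self.forward(hidden), dim=-1)
        return (pred @ self.emb_table.T).argmax(-1)
```

A toy usage, with random tensors standing in for real decoder states and embeddings:

```python
head = ContinuousOutputHead(hidden_dim=512,
                            pretrained_embeddings=torch.randn(32000, 300))
h = torch.randn(4, 10, 512)              # (batch, seq_len, hidden_dim)
ids = torch.randint(0, 32000, (4, 10))   # gold token ids
print(head.loss(h, ids))                 # scalar training loss
print(head.decode(h).shape)              # torch.Size([4, 10])
```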
Authors:
- Monisha Jegadeesan
- Sachin Kumar
- John Wieting
- Yulia Tsvetkov