Char2char Generation with Reranking for the E2E NLG Challenge (1811.05826v1)
Published 4 Nov 2018 in cs.CL and cs.LG
Abstract: This paper describes our submission to the E2E NLG Challenge. Recently, neural seq2seq approaches have become mainstream in NLG, often relying on word-level pre-processing (delexicalization) and post-processing (relexicalization) steps to handle rare words. By contrast, we train a simple character-level seq2seq model that requires no pre/post-processing (no delexicalization, tokenization, or even lowercasing), with surprisingly good results. For further improvement, we explore two reranking approaches for scoring candidates. We also introduce a synthetic dataset creation procedure, which opens up a new way of creating artificial datasets for Natural Language Generation.
- Shubham Agarwal
- Marc Dymetman
- Eric Gaussier
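For readers unfamiliar with the setup, the sketch below illustrates what a character-level seq2seq model looks like in practice: the raw meaning-representation (MR) string is consumed character by character and the utterance is generated the same way, with no tokenization, delexicalization, or lowercasing. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the `CharSeq2Seq` class, the example MR/utterance pair, and all hyperparameters are hypothetical.

```python
# Minimal character-level seq2seq sketch (hypothetical, for illustration only).
# Characters of the raw MR string go in; characters of the utterance come out.
import torch
import torch.nn as nn

class CharSeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode the source character sequence.
        _, state = self.encoder(self.embed(src_ids))
        # Teacher-forced decoding: feed gold target characters.
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)  # (batch, tgt_len, vocab_size) logits

# Hypothetical example: build a character vocabulary directly from raw
# strings -- no tokenization or lowercasing.
chars = sorted(set("name[The Vaults], food[English] The Vaults serves English food."))
stoi = {c: i for i, c in enumerate(chars)}
def encode(s):
    return torch.tensor([[stoi[c] for c in s]])

model = CharSeq2Seq(vocab_size=len(chars))
src = encode("name[The Vaults], food[English]")
tgt = encode("The Vaults serves English food.")
logits = model(src, tgt[:, :-1])  # predict the next character at each step
loss = nn.CrossEntropyLoss()(logits.reshape(-1, len(chars)), tgt[:, 1:].reshape(-1))
print(loss.item())
```

In this sketch the decoder is teacher-forced for training; at inference one would decode greedily or with beam search, and the paper's reranking approaches would then score the resulting candidate utterances.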