Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data (2011.03877v1)
Abstract: Natural language generation (NLG) is a critical component in conversational systems, owing to its role in formulating correct and natural text responses. Traditionally, NLG components have been deployed using template-based solutions. Although neural network solutions recently developed in the research community have been shown to provide several benefits, deploying such model-based solutions has been challenging due to high latency, correctness issues, and high data needs. In this paper, we present approaches that have helped us deploy data-efficient neural solutions for NLG in conversational systems to production. We describe a family of sampling and modeling techniques that attain production quality with lightweight neural network models using only a fraction of the data that would otherwise be necessary, and present a thorough comparison among them. Our results show that domain complexity dictates the appropriate approach for achieving high data efficiency. Finally, we distill the lessons from our experimental findings into a list of best practices for production-level NLG model development, and present them in a brief runbook. Importantly, the end products of all of these techniques are small sequence-to-sequence models (~2 MB) that we can reliably deploy in production.
- Ankit Arun
- Soumya Batra
- Vikas Bhardwaj
- Ashwini Challa
- Pinar Donmez
- Peyman Heidari
- Hakan Inan
- Shashank Jain
- Anuj Kumar
- Shawn Mei
- Karthik Mohan
- Michael White
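The abstract's headline artifact is a sequence-to-sequence model small enough (~2 MB) to deploy reliably in production. As a rough illustration of that scale, the sketch below builds a hypothetical LSTM encoder-decoder in PyTorch and counts its parameters; the architecture, vocabulary size, and dimensions are assumptions chosen for illustration, not the paper's actual model.

```python
# Minimal sketch of a ~2 MB seq2seq NLG model (hypothetical: the abstract
# does not specify the architecture; sizes here are illustrative only).
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        # Shared embedding for input meaning representations and output text
        # (an assumption: a shared vocabulary keeps the model small).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))           # encode the input
        dec_out, _ = self.decoder(self.embed(tgt), state)  # teacher-forced decoding
        return self.out(dec_out)                           # per-token vocabulary logits

model = TinySeq2Seq()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} params ~ {n_params * 4 / 1e6:.1f} MB (float32)")
```

With these toy dimensions the model has roughly 0.4M parameters, about 1.6 MB in float32, which is within the size budget the abstract describes; the paper's actual models may differ in architecture and capacity.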