Innovations in Neural Data-to-text Generation: A Survey (2207.12571v3)

Published 25 Jul 2022 in cs.CL

Abstract: The neural boom that has sparked NLP research over the last decade has likewise led to significant innovations in data-to-text generation (DTG). This survey offers a consolidated view of the neural DTG paradigm through a structured examination of approaches, benchmark datasets, and evaluation protocols. It draws boundaries separating DTG from the rest of the natural language generation (NLG) landscape, provides an up-to-date synthesis of the literature, and highlights the stages of technological adoption from within and outside the greater NLG umbrella. With this holistic view, we highlight promising avenues for DTG research that focus not only on the design of linguistically capable systems but also on systems that exhibit fairness and accountability.

Authors (3)
  1. Mandar Sharma (9 papers)
  2. Ajay Gogineni (1 paper)
  3. Naren Ramakrishnan (72 papers)
Citations (9)