
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation (1703.09902v4)

Published 29 Mar 2017 in cs.CL, cs.AI, and cs.NE

Abstract: This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures adopted in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them.

Survey of the State of the Art in Natural Language Generation: Core Tasks, Applications, and Evaluation

Overview

The paper by Gatt and Krahmer offers a comprehensive survey of Natural Language Generation (NLG), a subfield of artificial intelligence concerned with generating text or speech from non-linguistic input. The survey has three key objectives: to synthesize research on the core tasks in NLG, to illustrate recent applications and developments, and to discuss the challenges and methodologies of NLG evaluation. It covers both rule-based and data-driven approaches, providing insight into emerging trends such as neural network methods and interdisciplinary applications.

Core Tasks in NLG

The traditional breakdown of NLG tasks includes content determination, text structuring, sentence aggregation, lexicalization, referring expression generation (REG), and linguistic realization. Each of these tasks has seen substantial development:

  1. Content Determination: Involves selecting relevant information from the input data. Recent approaches leverage data-driven techniques, such as Hidden Markov Models and clustering, to improve the automatic alignment of data with text.
  2. Text Structuring: Focuses on organizing the selected information logically. Techniques range from domain-specific rules to machine learning models, such as topic modeling and optimization for coherence.
  3. Sentence Aggregation: Combines related messages into coherent sentences. Early work relied heavily on hand-crafted rules; however, modern research often uses data-driven approaches to learn aggregation patterns.
  4. Lexicalization: Deals with word selection to express the chosen content. Recent work includes statistical methods and neural networks to handle lexical choices, considering factors like variability and context sensitivity.
  5. Referring Expression Generation (REG): Generates linguistic expressions that identify entities. Approaches range from rule-based algorithms to probabilistic models, and REG has been the focus of multiple shared tasks (see the sketch after this list).
  6. Linguistic Realization: Converts sentence plans into grammatically correct text. Advances include hand-crafted grammars, stochastic methods, and integrating neural networks for surface realization.
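To make the REG discussion concrete, below is a minimal sketch of the classic Incremental Algorithm of Dale and Reiter (1995), one of the rule-based approaches the survey reviews. The entities, attribute set, and preference order are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the Incremental Algorithm for referring expression
# generation (Dale & Reiter, 1995). Attributes are tried in a fixed,
# domain-specific preference order; an attribute is kept only if it
# rules out at least one remaining distractor.

PREFERENCE_ORDER = ["type", "color", "size"]  # illustrative ranking

def incremental_reg(target, distractors):
    """Select attributes of `target` until all distractors are excluded."""
    description = {}
    remaining = list(distractors)
    for attr in PREFERENCE_ORDER:
        value = target.get(attr)
        if value is None:
            continue
        if any(d.get(attr) != value for d in remaining):
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break
    return description  # may underdetermine the target if distractors remain

# Usage: distinguish the target from two distractors.
target = {"type": "dog", "color": "black", "size": "large"}
others = [{"type": "dog", "color": "white", "size": "large"},
          {"type": "cat", "color": "black", "size": "small"}]
print(incremental_reg(target, others))  # {'type': 'dog', 'color': 'black'}
```

A realizer would then map the selected attributes onto a noun phrase such as "the black dog".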

Architectures and Approaches

NLG systems have evolved from early pipeline architectures to more integrated approaches, often influenced by data-driven and neural methods:

  1. Modular Architectures: Traditional pipeline systems divide the NLG process into distinct modules (e.g., text planning, sentence planning, and realization). While modular systems offer clarity and separation of concerns, they suffer from error propagation between stages and leave little room for feedback from later stages to earlier ones (see the pipeline sketch after this list).
  2. Planning-Based Approaches: These treat NLG as goal-oriented action planning, drawing on AI planning formalisms to provide a more flexible, integrated view of the NLG tasks.
  3. Integrated and Stochastic Approaches: Recent trends focus on learning end-to-end mappings from input to output. These approaches, especially those involving neural networks, integrate tasks like content selection and linguistic realization for improved robustness and flexibility.
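To ground the contrast between these architectures, here is a minimal sketch of the traditional three-stage pipeline (document planning, microplanning, realization). The record format, salience threshold, and template-based realizer are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a three-stage NLG pipeline. Data flows one way:
# document planning -> microplanning -> realization.

def document_planner(records):
    """Content determination and text structuring: pick and order messages."""
    selected = [r for r in records if r["salience"] > 0.5]
    return sorted(selected, key=lambda r: r["salience"], reverse=True)

def microplanner(messages):
    """Lexicalization: map each message onto a simple sentence plan."""
    return [{"subject": m["entity"],
             "verb": "rose" if m["delta"] > 0 else "fell",
             "amount": abs(m["delta"])} for m in messages]

def realizer(plans):
    """Linguistic realization: render sentence plans as text."""
    return " ".join(f"{p['subject'].capitalize()} {p['verb']} by {p['amount']} points."
                    for p in plans)

records = [
    {"entity": "the index", "delta": 12, "salience": 0.9},
    {"entity": "oil", "delta": -3, "salience": 0.7},
    {"entity": "gold", "delta": 1, "salience": 0.2},  # dropped by the planner
]
print(realizer(microplanner(document_planner(records))))
# The index rose by 12 points. Oil fell by 3 points.
```

The strictly one-way data flow is exactly where error propagation arises: if the document planner drops or misorders a message, no downstream stage can recover it.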

Evaluation

Evaluation in NLG remains a multi-faceted challenge due to variability in input and output, and the need for multi-dimensional quality assessment. The survey discusses:

  1. Intrinsic Methods: These include human judgments of fluency and correctness, and automatic metrics such as BLEU, ROUGE, and METEOR, which assess n-gram overlap or edit distance between system outputs and reference texts (the core counting step is sketched after this list).
  2. Extrinsic Methods: These assess the effectiveness of generated text in achieving specific goals. Such evaluations can be task-based (e.g., decision support, persuasion) and often involve user studies to measure utility.
  3. Glass Box vs. Black Box Evaluation: The former examines the contributions of individual system components, while the latter assesses the overall system performance. Both approaches provide complementary insights into system robustness and efficiency.
  4. Meta-Evaluation: Studies highlight the often low correlation between different evaluation methods, suggesting the necessity of using a diverse array of metrics to capture multiple dimensions of text quality.
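As a concrete view of what these intrinsic automatic metrics measure, here is a minimal sketch of BLEU-style modified n-gram precision. Full BLEU combines precisions over several n-gram orders with a brevity penalty and supports multiple references; the sketch shows only the core counting step.

```python
# Minimal sketch of modified n-gram precision, the counting step at the
# heart of BLEU. Each reference n-gram can be credited at most as many
# times as it occurs in the reference (clipping).
from collections import Counter

def modified_ngram_precision(candidate, reference, n=2):
    """Fraction of candidate n-grams that also occur in the reference."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate.split()), ngrams(reference.split())
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

print(modified_ngram_precision(
    "the cat sat on the mat", "the cat is on the mat"))  # 0.6
```

The example also hints at the meta-evaluation point above: the candidate and reference differ in a meaning-bearing word ("sat" vs. "is") yet still share three of five bigrams, which is one reason surface-overlap metrics can diverge from human judgments.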

Implications and Future Directions

This comprehensive survey illuminates several avenues for future research:

  1. Integration of Multimodal Data: The increasing availability of heterogeneous data sources (text, images, structured data) calls for more advanced techniques that can handle diverse input formats.
  2. Improved Data Acquisition and Alignment: Enhancing methods for automatic alignment of input data with text can significantly benefit the training of data-driven NLG systems.
  3. Interdisciplinary Applications: Collaborative work with fields like computational creativity, computer vision, and cognitive science can enhance the theoretical foundation and practical applications of NLG.
  4. Scalability and Efficiency: Addressing practical challenges in scaling up NLG technology for industrial applications remains critical, especially concerning the efficiency and robustness of data-driven approaches.

Conclusion

The paper by Gatt and Krahmer provides an in-depth look at the current landscape of NLG, examining both foundational aspects and emerging trends. With the growing intersection of AI, linguistics, and various application domains, NLG continues to be a dynamic field with significant potential for future advancements. The survey emphasizes the importance of robust, flexible, and scalable solutions, combined with rigorous evaluation methodologies, to advance the state of the art in NLG research and its applications.

Authors: Albert Gatt and Emiel Krahmer