
Toward Abstractive Summarization Using Semantic Representations (1805.10399v1)

Published 25 May 2018 in cs.CL

Abstract: We present a novel abstractive summarization framework that draws on the recent development of a treebank for the Abstract Meaning Representation (AMR). In this framework, the source text is parsed to a set of AMR graphs, the graphs are transformed into a summary graph, and then text is generated from the summary graph. We focus on the graph-to-graph transformation that reduces the source semantic graph into a summary graph, making use of an existing AMR parser and assuming the eventual availability of an AMR-to-text generator. The framework is data-driven, trainable, and not specifically designed for a particular domain. Experiments on gold-standard AMR annotations and system parses show promising results. Code is available at: https://github.com/summarization

Toward Abstractive Summarization Using Semantic Representations

The paper "Toward Abstractive Summarization Using Semantic Representations" introduces a novel framework for abstractive summarization that capitalizes on advancements in Abstract Meaning Representation (AMR). Researchers from Carnegie Mellon University propose a data-driven, trainable approach, domain-independent in its design, which facilitates the generation of text summaries from AMR-derived semantic graphs. This work marks the first investigation of abstractive summarization through the transformation of AMR graphs.

Framework Overview

The proposed summarization framework comprises several key steps:

  1. AMR Parsing: The input text is parsed into sentence-level AMR graphs using an existing parser, JAMR, which achieves roughly a 63% F-score on concept and relation prediction in this setting.
  2. Graph Transformation: This stage transforms sentence-level AMR graphs into a single coherent summary graph. The focus in this paper is on the graph summarization step, which is framed as a structured prediction task.
  3. Text Generation: Finally, summarization involves generating text from the constructed summary graph. Future work is proposed to explore this component further.
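For illustration, a sentence such as "The dog chased the cat" would parse to an AMR graph, conventionally written in PENMAN notation (this toy example is ours, not drawn from the paper):

```
(c / chase-01
   :ARG0 (d / dog)
   :ARG1 (t / cat))
```

Here `chase-01` is a PropBank-style predicate sense, and `:ARG0`/`:ARG1` label its semantic roles.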

The graph-summarization step first merges the sentence-level AMR graphs on shared concepts into a single source graph, then predicts a summary subgraph using a rich feature set, with integer linear programming (ILP) decoding enforcing that the selected subgraph remains connected and tree-structured. The resulting summary graph represents the most salient semantic content of the original text.
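The merging step can be sketched as follows. This is a minimal, self-contained illustration, not the paper's implementation: graphs are simplified to sets of `(source_concept, relation, target_concept)` triples, and nodes are collapsed purely on identical concept labels.

```python
def merge_graphs(sentence_graphs):
    """Union the edge sets of per-sentence graphs; nodes sharing the
    same concept label collapse into one node in the merged graph."""
    merged_nodes = set()
    merged_edges = set()
    for edges in sentence_graphs:
        for src, rel, tgt in edges:
            merged_nodes.add(src)
            merged_nodes.add(tgt)
            merged_edges.add((src, rel, tgt))
    return merged_nodes, merged_edges

# Two toy sentence graphs that share the concept "dog"
g1 = [("chase-01", "ARG0", "dog"), ("chase-01", "ARG1", "cat")]
g2 = [("bark-01", "ARG0", "dog")]

nodes, edges = merge_graphs([g1, g2])
print(len(nodes))  # 4 -- "dog" is collapsed into a single node
print(len(edges))  # 3
```

The subsequent ILP then scores nodes and edges of this merged graph and selects a connected, tree-structured subgraph; that optimization is omitted here for brevity.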

Experimental Evaluation

The evaluation uses the proxy report section of the AMR Bank, measuring both intrinsic graph-transformation quality (subgraph prediction) and summary term selection via ROUGE-1. On both gold-standard AMR annotations and JAMR system parses, the framework shows promising predictive capacity, with ramp-loss training yielding substantial improvements in both subgraph prediction and ROUGE-1 scores.
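For reference, ROUGE-1 recall is standard unigram overlap between a candidate and a reference summary. A minimal sketch of the metric (the textbook definition, not code from the paper):

```python
from collections import Counter

def rouge1_recall(candidate_tokens, reference_tokens):
    """Fraction of reference unigrams covered by the candidate,
    with per-word counts clipped to the reference counts."""
    cand = Counter(candidate_tokens)
    ref = Counter(reference_tokens)
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

ref = "the dog chased the cat".split()
cand = "a dog chased a cat".split()
print(rouge1_recall(cand, ref))  # 3 of 5 reference tokens matched -> 0.6
```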

Implications and Future Directions

Graph-based methods can reduce redundancy by collapsing coreferent concepts and offer a more refined summarization approach than traditional extractive methods. This work also provides valuable insights into constructing a semantic-level summary that retains the intent and salient points of the source without lapsing into redundancy. However, some limitations remain, notably in semantic generation, the impact of graph expansion, and the precision of node-edge relationships.

The research underscores the importance of semantic analysis in abstractive summarization, paving the way for further investigation into full-spectrum architectures integrating AMR parsing, AMR graph summarization, and AMR-to-text generation. The results imply potential applications spanning complex documents, such as legal texts, multimedia content, and other domains where concise, coherent narration is critical.

Overall, this paper lays the groundwork for significant advancements in the field of abstractive summarization with a robust theoretical and computational foundation, providing a baseline for future exploratory and application-focused studies in semantic representation-based summarization.

Authors (5)
  1. Fei Liu
  2. Jeffrey Flanigan
  3. Sam Thomson
  4. Norman Sadeh
  5. Noah A. Smith
Citations (292)