A Systematic Survey of Text Summarization: From Statistical Methods to Large Language Models (2406.11289v1)

Published 17 Jun 2024 in cs.CL

Abstract: Text summarization research has undergone several significant transformations with the advent of deep neural networks, pre-trained language models (PLMs), and recent large language models (LLMs). This survey thus provides a comprehensive review of the research progress and evolution in text summarization through the lens of these paradigm shifts. It is organized into two main parts: (1) a detailed overview of datasets, evaluation metrics, and summarization methods before the LLM era, encompassing traditional statistical methods, deep learning approaches, and PLM fine-tuning techniques, and (2) the first detailed examination of recent advancements in benchmarking, modeling, and evaluating summarization in the LLM era. By synthesizing existing literature and presenting a cohesive overview, this survey also discusses research trends and open challenges, and proposes promising research directions in summarization, aiming to guide researchers through the evolving landscape of summarization research.

Authors (3)
  1. Haopeng Zhang (32 papers)
  2. Philip S. Yu (592 papers)
  3. Jiawei Zhang (529 papers)
Citations (4)