
An Analysis of Abstractive Text Summarization Using Pre-trained Models (2303.12796v1)

Published 25 Feb 2023 in cs.CL, cs.AI, and cs.LG

Abstract: People nowadays use search engines like Google, Yahoo, and Bing to find information on the Internet. Due to the explosion in data, it is helpful for users if they are provided relevant summaries of the search results rather than just links to webpages. Text summarization has become a vital approach to help consumers swiftly grasp vast amounts of information. In this paper, different pre-trained models for text summarization are evaluated on different datasets. Specifically, we use three pre-trained models, namely google/pegasus-cnn-dailymail, T5-base, and facebook/bart-large-cnn, and three datasets, namely CNN-DailyMail, SAMSum, and BillSum, to obtain the output from the three models. The pre-trained models are compared over these datasets, each of 2000 examples, using the ROUGE and BLEU metrics.
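
The evaluation setup described in the abstract can be approximated with the Hugging Face ecosystem. The sketch below is not the authors' code: it assumes the `transformers`, `datasets`, and `evaluate` packages, uses the public checkpoint IDs closest to those named in the abstract (e.g. `google/pegasus-cnn_dailymail`), and illustrates only the CNN/DailyMail case with 2000 test examples scored by ROUGE and BLEU.

```python
# Minimal sketch (assumed setup, not the paper's released code):
# summarize 2000 CNN/DailyMail test articles with the three pre-trained
# checkpoints and report ROUGE and BLEU against the reference highlights.
from transformers import pipeline
from datasets import load_dataset
import evaluate

MODELS = [
    "google/pegasus-cnn_dailymail",  # abstract writes "google/pegasus-cnn-dailymail"
    "t5-base",
    "facebook/bart-large-cnn",
]

# 2000 examples per dataset, as stated in the abstract.
dataset = load_dataset("cnn_dailymail", "3.0.0", split="test[:2000]")
rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

for model_name in MODELS:
    summarizer = pipeline("summarization", model=model_name, truncation=True)
    predictions = [
        out["summary_text"]
        for out in summarizer(dataset["article"], max_length=128, batch_size=8)
    ]
    references = dataset["highlights"]
    print(model_name)
    print("  ROUGE:", rouge.compute(predictions=predictions, references=references))
    print("  BLEU :", bleu.compute(predictions=predictions,
                                    references=[[r] for r in references]))
```

The same loop would apply to SAMSum and BillSum by swapping the dataset name and the article/reference column names; generation lengths and batch sizes here are illustrative defaults, not values reported in the paper.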

Authors (4)
  1. Tohida Rehman (11 papers)
  2. Suchandan Das (3 papers)
  3. Debarshi Kumar Sanyal (21 papers)
  4. Samiran Chattopadhyay (23 papers)
Citations (8)
