Transformer Based Implementation for Automatic Book Summarization (2301.07057v1)

Published 17 Jan 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Document summarization is the task of generating a concise, meaningful summary of a given document that includes its relevant and topic-important points. There are two approaches: extractive summarization, which selects the most relevant sentences from the document itself and adds them to the summary, and abstractive summarization, which generates new sentences for the summary. Training a machine learning model to perform tasks that are time-consuming or very difficult for humans to evaluate is a major challenge, and book abstract generation is one such complex task. Traditional machine learning models are being augmented with pre-trained transformers. Transformer-based language models trained in a self-supervised fashion are gaining a lot of attention when fine-tuned for Natural Language Processing (NLP) downstream tasks such as text summarization. This work is an attempt to use transformer-based techniques for abstract generation.
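The abstract does not name a specific model or library, so the following is only a minimal sketch of the abstractive approach it describes: a pre-trained transformer fine-tuned for summarization, accessed here via the Hugging Face `transformers` library with the `facebook/bart-large-cnn` checkpoint chosen purely for illustration.

```python
# Minimal sketch of abstractive summarization with a pre-trained transformer.
# Assumes the Hugging Face `transformers` library; the checkpoint below is an
# illustrative choice, not the one used in the paper.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

chapter_text = (
    "Document summarization condenses a long text into a short, meaningful "
    "overview while preserving its most important points. Extractive methods "
    "copy salient sentences verbatim; abstractive methods generate new "
    "sentences that paraphrase the source."
)

# max_length / min_length bound the generated summary length in tokens.
summary = summarizer(chapter_text, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```

Since an entire book far exceeds a transformer's input window, a practical book-summarization pipeline would typically split the text into chunks that fit the model's context, summarize each chunk, and then summarize the concatenated partial summaries.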

Authors (3)
  1. Siddhant Porwal (1 paper)
  2. Laxmi Bewoor (1 paper)
  3. Vivek Deshpande (1 paper)
Citations (2)