Neural Sequence-to-Sequence Modeling with Attention by Leveraging Deep Learning Architectures for Enhanced Contextual Understanding in Abstractive Text Summarization (2404.08685v1)

Published 8 Apr 2024 in cs.CL and cs.LG

Abstract: Automatic text summarization (TS) plays a pivotal role in condensing large volumes of information into concise, coherent summaries, facilitating efficient information retrieval and comprehension. This paper presents a novel framework for abstractive TS of single documents, which integrates three dominant aspects: structural, semantic, and neural-based approaches. The proposed framework merges machine learning and knowledge-based techniques to achieve a unified methodology. The framework consists of three main phases: pre-processing, machine learning, and post-processing. In the pre-processing phase, a knowledge-based Word Sense Disambiguation (WSD) technique is employed to generalize ambiguous words, enhancing content generalization. Semantic content generalization is then performed to address out-of-vocabulary (OOV) or rare words, ensuring comprehensive coverage of the input document. Subsequently, the generalized text is transformed into a continuous vector space using neural language processing techniques. A deep sequence-to-sequence (seq2seq) model with an attention mechanism is employed to predict a generalized summary based on the vector representation. In the post-processing phase, heuristic algorithms and text similarity metrics are utilized to refine the generated summary further. Concepts from the generalized summary are matched with specific entities, enhancing coherence and readability. Experimental evaluations conducted on prominent datasets, including Gigaword, DUC 2004, and CNN/DailyMail, demonstrate the effectiveness of the proposed framework. Results indicate significant improvements in handling rare and OOV words, outperforming existing state-of-the-art deep learning techniques. The proposed framework presents a comprehensive and unified approach towards abstractive TS, combining the strengths of structure, semantics, and neural-based methodologies.
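The content-generalization step lends itself to a short illustration. Below is a minimal sketch of the idea, assuming NLTK's simplified Lesk algorithm as the knowledge-based WSD method and immediate WordNet hypernyms as the generalized concepts; the abstract does not specify the authors' exact procedure, so the `vocab` set and the `<...>` concept tags here are illustrative assumptions, not the paper's scheme.

```python
# Sketch of knowledge-based content generalization (assumptions noted above).
# Requires: pip install nltk; then nltk.download('wordnet') and nltk.download('omw-1.4')
from nltk.wsd import lesk

def generalize(tokens, vocab):
    """Replace OOV/rare tokens with a more general WordNet concept."""
    out = []
    for tok in tokens:
        if tok in vocab:                      # in-vocabulary: keep as-is
            out.append(tok)
            continue
        sense = lesk(tokens, tok)             # knowledge-based WSD over the sentence context
        if sense is not None and sense.hypernyms():
            # Replace the OOV word with its immediate hypernym's first lemma,
            # marked with <...> so post-processing can re-specialize it later.
            out.append("<" + sense.hypernyms()[0].lemmas()[0].name() + ">")
        else:
            out.append("<unk>")               # no usable sense: fall back to UNK
    return out

tokens = "the lorry crashed on the motorway".split()
print(generalize(tokens, vocab={"the", "crashed", "on"}))
# e.g. ['the', '<truck>', 'crashed', 'on', 'the', '<highway>']
# (the exact concepts depend on which senses Lesk selects)
```

Mapping rare surface forms onto shared hypernym concepts shrinks the effective vocabulary the seq2seq model must cover, which is how the framework mitigates rare and OOV words before decoding; the post-processing phase then matches those concepts back to specific entities from the source document.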

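The abstract names a deep seq2seq model with an attention mechanism but does not specify the variant. As a point of reference, here is a minimal PyTorch sketch of Bahdanau-style additive attention, the mechanism such summarizers commonly use to weight encoder states at each decoding step; the dimensions and names are illustrative, not the paper's architecture.

```python
# Minimal additive-attention sketch (Bahdanau-style), for illustration only.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.W_enc = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_dec = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, dec_state, enc_outputs):
        # dec_state: (batch, hidden); enc_outputs: (batch, src_len, hidden)
        scores = self.v(torch.tanh(
            self.W_enc(enc_outputs) + self.W_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                                    # (batch, src_len)
        weights = torch.softmax(scores, dim=-1)           # attention distribution
        context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
        return context, weights                           # (batch, hidden), (batch, src_len)

attn = AdditiveAttention(hidden_dim=256)
ctx, w = attn(torch.zeros(4, 256), torch.randn(4, 30, 256))
print(ctx.shape, w.shape)  # torch.Size([4, 256]) torch.Size([4, 30])
```

At each decoder step, the context vector is combined with the decoder state to predict the next token of the generalized summary.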