Improving Multi-Document Summarization via Text Classification (1611.09238v1)

Published 28 Nov 2016 in cs.CL and cs.IR

Abstract: Multi-document summarization has reached a bottleneck due to the lack of sufficient training data and of diverse document categories. Text classification can make up for these deficiencies. In this paper, we propose a novel summarization system called TCSum, which leverages plentiful text classification data to improve the performance of multi-document summarization. TCSum projects documents onto distributed representations that act as a bridge between text classification and summarization. It also uses the classification results to produce summaries of different styles. Extensive experiments on DUC generic multi-document summarization datasets show that TCSum achieves state-of-the-art performance without any hand-crafted features and is able to capture variations in summary style across different text categories.

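The abstract describes the model only at a high level: a shared distributed document representation feeds both a text classifier and a sentence scorer for summarization, and the predicted category selects the summary style. The sketch below is a minimal, hypothetical NumPy illustration of that data flow; all names, dimensions, and the averaging and scoring functions are assumptions for illustration, not the paper's actual (learned) model.

```python
import numpy as np

# Hypothetical sketch of a TCSum-style shared representation.
# Sizes and weight matrices are placeholders; the real model learns them.

rng = np.random.default_rng(0)

EMB_DIM = 50        # word embedding size (assumption)
DOC_DIM = 100       # distributed document representation size (assumption)
NUM_CATEGORIES = 5  # text-classification categories (assumption)

# Shared projection: document embedding -> distributed representation.
W_shared = rng.normal(scale=0.1, size=(DOC_DIM, EMB_DIM))

# Classification head, trained on plentiful text-classification data.
W_cls = rng.normal(scale=0.1, size=(NUM_CATEGORIES, DOC_DIM))

# One sentence-scoring transform per category, so the predicted
# category can switch the summary "style".
W_sum = rng.normal(scale=0.1, size=(NUM_CATEGORIES, DOC_DIM))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def doc_representation(word_embeddings):
    """Average word embeddings, then project into the shared space."""
    avg = word_embeddings.mean(axis=0)        # (EMB_DIM,)
    return np.tanh(W_shared @ avg)            # (DOC_DIM,)

def classify(doc_vec):
    """Predict the text category from the shared representation."""
    return softmax(W_cls @ doc_vec)           # (NUM_CATEGORIES,)

def score_sentences(sentence_vecs, doc_vec, category):
    """Score sentences with the transform of the predicted category."""
    style_vec = W_sum[category]               # category-specific weights
    return np.array([s @ (style_vec * doc_vec) for s in sentence_vecs])

# Usage: a fake 20-word document split into 4 sentences of 5 words each.
words = rng.normal(size=(20, EMB_DIM))
doc_vec = doc_representation(words)
category = int(np.argmax(classify(doc_vec)))
sentences = [doc_representation(words[i * 5:(i + 1) * 5]) for i in range(4)]
print(score_sentences(np.stack(sentences), doc_vec, category))
```

In the paper's framing, the shared representation is what lets the abundant classification data supervise components that the small DUC summarization datasets alone could not train; the sketch above only mirrors that wiring, not the training procedure.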
Authors (4)
  1. Ziqiang Cao (34 papers)
  2. Wenjie Li (183 papers)
  3. Sujian Li (84 papers)
  4. Furu Wei (292 papers)
Citations (103)
