
Meeting Summarization with Pre-training and Clustering Methods (2111.08210v1)

Published 16 Nov 2021 in cs.CL

Abstract: Automatic meeting summarization is becoming increasingly popular. The ability to automatically summarize meetings and extract key information could greatly increase the efficiency of our work and lives. In this paper, we experiment with different approaches to improving the performance of query-based meeting summarization. We start with HMNet\cite{hmnet}, a hierarchical network that employs both a word-level transformer and a turn-level transformer, as the baseline. We explore the effectiveness of pre-training the model on a large news-summarization dataset. We investigate adding query embeddings as part of the input vectors for query-based summarization. Furthermore, we experiment with extending the locate-then-summarize approach of QMSum\cite{qmsum} with an intermediate clustering step. Lastly, we compare the performance of our baseline models with BART, a state-of-the-art language model that is effective for summarization. We achieve improved performance by adding query embeddings to the input of the model, by using BART as an alternative language model, and by using clustering methods to extract key information at the utterance level before feeding the text into the summarization models.

Authors (3)
  1. Andras Huebner
  2. Wei Ji
  3. Xiang Xiao