Building Real-World Meeting Summarization Systems using Large Language Models: A Practical Perspective (2310.19233v3)

Published 30 Oct 2023 in cs.CL

Abstract: This paper studies how to effectively build meeting summarization systems for real-world usage using LLMs. For this purpose, we conduct an extensive evaluation and comparison of various closed-source and open-source LLMs, namely, GPT-4, GPT-3.5, PaLM-2, and LLaMA-2. Our findings reveal that most closed-source LLMs are generally better in terms of performance. However, much smaller open-source models like LLaMA-2 (7B and 13B) can still achieve performance comparable to the large closed-source models, even in zero-shot scenarios. Considering the privacy concerns with closed-source models, which are only accessible via API, alongside the high cost of using their fine-tuned versions, open-source models that achieve competitive performance are more advantageous for industrial use. Balancing performance with the associated costs and privacy concerns, the LLaMA-2-7B model appears most promising for industrial usage. In sum, this paper offers practical insights on using LLMs for real-world business meeting summarization, shedding light on the trade-offs between performance and cost.
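To make the zero-shot setting concrete, below is a minimal sketch (not taken from the paper) of how an open-source LLaMA-2-7B chat model could be prompted to summarize a meeting transcript using Hugging Face transformers. The checkpoint name, prompt wording, and generation settings are assumptions chosen for illustration; the paper does not prescribe this exact setup.

```python
# Illustrative sketch: zero-shot meeting summarization with an open-source
# LLaMA-2-7B chat model run locally via Hugging Face transformers.
# Checkpoint name, prompt, and decoding settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

def summarize_meeting(transcript: str, max_new_tokens: int = 256) -> str:
    """Zero-shot summarization: no fine-tuning, a single instruction prompt."""
    prompt = (
        "[INST] Summarize the key decisions and action items from the "
        f"following meeting transcript:\n\n{transcript} [/INST]"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=False
    )
    # Decode only the newly generated tokens (the summary itself).
    summary_ids = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(summary_ids, skip_special_tokens=True)

if __name__ == "__main__":
    print(summarize_meeting(
        "Alice: Let's ship v2 on Friday. Bob: I'll update the docs by Thursday."
    ))
```

Running a 7B model locally in this way avoids sending transcripts to an external API, which is the privacy and cost argument the paper makes for preferring smaller open-source models in industrial deployments.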

Authors (4)
  1. Md Tahmid Rahman Laskar (30 papers)
  2. Xue-Yong Fu (11 papers)
  3. Cheng Chen (262 papers)
  4. Shashi Bhushan TN (9 papers)
Citations (25)