Heuristic-based Inter-training to Improve Few-shot Multi-perspective Dialog Summarization (2203.15590v2)

Published 29 Mar 2022 in cs.CL

Abstract: Many organizations require their customer-care agents to manually summarize their conversations with customers. These summaries are vital for the organizations' decision-making purposes. The perspective from which the summary is created depends on the application of the summaries. With this work, we study the multi-perspective summarization of customer-care conversations between support agents and customers. We observe that there are different heuristics associated with summaries of different perspectives, and explore these heuristics to create weak-labeled data for intermediate training of the models before fine-tuning with scarce human-annotated summaries. Most importantly, we show that our approach supports models to generate multi-perspective summaries with a very small amount of annotated data. For example, our approach achieves 94% of the performance (Rouge-2) of a model trained with the original data, by training only with 7% of the original data.
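
The sketch below illustrates the weak-labeling idea described in the abstract: perspective-specific heuristics turn unlabeled dialogs into pseudo-summaries that can be used for intermediate training before fine-tuning on the small annotated set. The heuristics, data format, and function names here are illustrative assumptions, not the paper's exact rules.

```python
# Minimal sketch of heuristic-based weak labeling for multi-perspective
# dialog summarization. All heuristics and the turn format are assumptions
# made for illustration; they are not the paper's exact implementation.

def agent_perspective_label(turns):
    """Weak label for an agent-focused summary: keep the agent's turns."""
    return " ".join(t["text"] for t in turns if t["speaker"] == "agent")

def customer_perspective_label(turns):
    """Weak label for a customer-focused summary: the customer's opening problem statement."""
    customer = [t["text"] for t in turns if t["speaker"] == "customer"]
    return customer[0] if customer else ""

def build_weak_dataset(dialogs, heuristic):
    """Pair each dialog (a list of {'speaker', 'text'} turns) with a heuristic pseudo-summary."""
    return [
        {"dialog": " ".join(t["text"] for t in turns), "summary": heuristic(turns)}
        for turns in dialogs
    ]

if __name__ == "__main__":
    example = [
        {"speaker": "customer", "text": "My router keeps dropping the connection."},
        {"speaker": "agent", "text": "I reset the line and updated the firmware."},
    ]
    weak_data = build_weak_dataset([example], agent_perspective_label)
    print(weak_data[0]["summary"])  # -> "I reset the line and updated the firmware."
    # Stage 1 (inter-training): fine-tune a seq2seq summarizer on weak_data.
    # Stage 2: continue fine-tuning on the scarce human-annotated summaries.
```

Under these assumptions, the weak-labeled pairs stand in for gold summaries during the intermediate training stage, so the expensive human annotations are only needed for the final fine-tuning step.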

Authors (6)
  1. Benjamin Sznajder (14 papers)
  2. Chulaka Gunasekara (21 papers)
  3. Guy Lev (9 papers)
  4. Sachin Joshi (2 papers)
  5. Eyal Shnarch (15 papers)
  6. Noam Slonim (50 papers)
Citations (1)
