
Medically Aware GPT-3 as a Data Generator for Medical Dialogue Summarization (2110.07356v1)

Published 9 Sep 2021 in cs.CL, cs.AI, and cs.LG

Abstract: In medical dialogue summarization, summaries must be coherent and must capture all of the medically relevant information in the dialogue. However, learning effective summarization models requires large amounts of labeled data, which are especially hard to obtain. We present an algorithm to create synthetic training data with an explicit focus on capturing medically relevant information. We use GPT-3 as the backbone of our algorithm and, leveraging low-shot learning and an ensemble method, scale 210 human-labeled examples to yield results comparable to using 6400 human-labeled examples (~30x). In detailed experiments, we show that this approach produces high-quality training data that can further be combined with human-labeled data to obtain summaries that are strongly preferred over those produced by models trained on human data alone, in terms of both medical accuracy and coherence.
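
The abstract sketches the data-generation recipe: prompt GPT-3 with a small set of labeled (dialogue, summary) pairs, generate summaries for unlabeled dialogues, and use an ensemble to keep outputs that capture the medically relevant content. Below is a minimal Python sketch of that idea, assuming the legacy OpenAI Completion API; the prompt template, the rotation-based prompt diversification, the toy concept extractor, and the coverage-based selection rule are illustrative assumptions, not the authors' exact method.

```python
import openai  # legacy Completion API (openai<1.0); swap in the current client as needed

openai.api_key = "YOUR_API_KEY"  # assumption: key supplied by the caller

# A handful of human-labeled (dialogue, summary) pairs, standing in for
# draws from the paper's 210 labeled seed examples.
FEW_SHOT_EXAMPLES = [
    ("Doctor: How long have you had the cough?\nPatient: About two weeks.",
     "Patient reports a cough lasting about two weeks."),
    ("Doctor: Any fever?\nPatient: Yes, 101F since yesterday.",
     "Patient reports a fever of 101F starting yesterday."),
]

# Toy stand-in for a medical concept extractor; the paper's focus is
# medical-information coverage, but the exact extractor is an assumption here.
MEDICAL_TERMS = {"cough", "fever", "rash", "ibuprofen", "101f"}

def extract_medical_concepts(text):
    """Return the medical terms mentioned in the text (toy keyword match)."""
    return {w.strip(".,?!") for w in text.lower().split()} & MEDICAL_TERMS

def build_prompt(dialogue, examples):
    """Low-shot prompt: labeled pairs followed by the unlabeled dialogue."""
    parts = [f"Dialogue: {d}\nSummary: {s}\n" for d, s in examples]
    parts.append(f"Dialogue: {dialogue}\nSummary:")
    return "\n".join(parts)

def gpt3_ensemble_summary(dialogue, n_members=3):
    """Query GPT-3 with several few-shot prompts (diversified here by
    rotating the examples) and keep the candidate summary that covers
    the most medical concepts from the dialogue."""
    concepts = extract_medical_concepts(dialogue)
    best, best_cov = None, -1
    for i in range(n_members):
        examples = FEW_SHOT_EXAMPLES[i:] + FEW_SHOT_EXAMPLES[:i]
        resp = openai.Completion.create(
            engine="text-davinci-001",
            prompt=build_prompt(dialogue, examples),
            max_tokens=128,
            temperature=0.7,
            stop=["\nDialogue:"],
        )
        cand = resp.choices[0].text.strip()
        cov = len(extract_medical_concepts(cand) & concepts)
        if cov > best_cov:
            best, best_cov = cand, cov
    return best  # becomes the summary half of a synthetic training pair
```

Each kept candidate can then be paired with its dialogue as a synthetic training example and mixed with the human-labeled data, which is how the paper reports getting results comparable to ~30x more human labels.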

Authors (4)
  1. Bharath Chintagunta (3 papers)
  2. Namit Katariya (9 papers)
  3. Xavier Amatriain (20 papers)
  4. Anitha Kannan (29 papers)
Citations (133)
