Consultation Checklists: Standardising the Human Evaluation of Medical Note Generation (2211.09455v1)

Published 17 Nov 2022 in cs.CL

Abstract: Evaluating automatically generated text is generally hard due to the inherently subjective nature of many aspects of the output quality. This difficulty is compounded in automatic consultation note generation by differing opinions between medical experts both about which patient statements should be included in generated notes and about their respective importance in arriving at a diagnosis. Previous real-world evaluations of note-generation systems saw substantial disagreement between expert evaluators. In this paper we propose a protocol that aims to increase objectivity by grounding evaluations in Consultation Checklists, which are created in a preliminary step and then used as a common point of reference during quality assessment. We observed good levels of inter-annotator agreement in a first evaluation study using the protocol; further, using Consultation Checklists produced in the study as reference for automatic metrics such as ROUGE or BERTScore improves their correlation with human judgements compared to using the original human note.
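
The reference-based evaluation described in the abstract can be illustrated with a small sketch (not the authors' code): a generated note is scored with ROUGE against either the original clinician note or the Consultation Checklist, and the resulting scores are correlated with human quality ratings. The example assumes the `rouge-score` and `scipy` Python packages; the data, variable names, and helper function are purely illustrative.

```python
from rouge_score import rouge_scorer
from scipy.stats import spearmanr

# Hypothetical data: generated notes for two consultations, two candidate
# reference texts for each (original human note vs. Consultation Checklist),
# and a human quality rating per generated note.
generated_notes = [
    "Patient reports a dry cough for two weeks, no fever.",
    "Patient complains of lower back pain after lifting.",
]
original_notes = [
    "2/52 dry cough, afebrile.",
    "Acute lower back pain, onset after lifting heavy object.",
]
checklists = [
    "- Dry cough\n- Duration: two weeks\n- No fever",
    "- Lower back pain\n- Onset after lifting\n- No radiation",
]
human_ratings = [4.0, 3.0]

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l_f1(reference: str, candidate: str) -> float:
    # ROUGE-L F1 of the generated note against a given reference text.
    return scorer.score(reference, candidate)["rougeL"].fmeasure

scores_vs_note = [rouge_l_f1(r, c) for r, c in zip(original_notes, generated_notes)]
scores_vs_checklist = [rouge_l_f1(r, c) for r, c in zip(checklists, generated_notes)]

# The paper's claim corresponds to the checklist-based scores correlating
# better with human judgements; a real comparison needs many consultations.
print("vs. original note:", spearmanr(scores_vs_note, human_ratings))
print("vs. checklist:    ", spearmanr(scores_vs_checklist, human_ratings))
```

The same comparison applies to BERTScore or any other reference-based metric: only the reference text changes, not the metric itself.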

Authors (6)
  1. Aleksandar Savkov (10 papers)
  2. Francesco Moramarco (8 papers)
  3. Alex Papadopoulos Korfiatis (6 papers)
  4. Mark Perera (3 papers)
  5. Anya Belz (17 papers)
  6. Ehud Reiter (31 papers)
Citations (7)
