A preliminary study on evaluating Consultation Notes with Post-Editing

Francesco Moramarco, Aleksandar Savkov, Alex Papadopoulos Korfiatis, Ehud Reiter

Research output: Chapter in Book/Report/Conference proceeding, Published conference contribution

4 Citations (Scopus)
4 Downloads (Pure)


Automatic summarisation has the potential to aid physicians in streamlining clerical tasks such as note-taking. However, it is notoriously difficult to evaluate these systems and to demonstrate that they are safe to use in a clinical setting. To circumvent this issue, we propose a semi-automatic approach whereby physicians post-edit generated notes before submitting them. We conduct a preliminary study on the time saved by post-editing automatically generated consultation notes. Our evaluators are asked to listen to mock consultations and to post-edit three generated notes. We time this process and find that it is faster than writing the note from scratch. We present insights and lessons learnt from this experiment.
Original language: English
Title of host publication: Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval)
Subtitle of host publication: EACL 2021
Editors: Anya Belz, Shubham Agarwal, Yvette Graham, Ehud Reiter, Anastasia Shimorina
Number of pages: 7
ISBN (Print): 978-1-954085-10-7
Publication status: Published - 19 Apr 2021
Event: Workshop on Human Evaluation of NLP Systems - virtual
Duration: 19 Apr 2021 - 19 Apr 2021


Workshop: Workshop on Human Evaluation of NLP Systems


