An Iterative Optimizing Framework for Radiology Report Summarization with ChatGPT (2304.08448v3)

Published 17 Apr 2023 in cs.CL and cs.AI

Abstract: The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians, and it is typically written by radiologists based on the 'Findings' section. However, writing numerous impressions can be laborious and error-prone for radiologists. Although recent studies have achieved promising results in automatic impression generation using large-scale medical text data for pre-training and fine-tuning pre-trained LLMs, such models often require substantial amounts of medical text data and have poor generalization performance. While LLMs like ChatGPT have shown strong generalization capabilities and performance, their performance in specific domains, such as radiology, remains under-investigated and potentially limited. To address this limitation, we propose ImpressionGPT, which leverages the in-context learning capability of LLMs by constructing dynamic contexts using domain-specific, individualized data. This dynamic prompt approach enables the model to learn contextual knowledge from semantically similar examples from existing data. Additionally, we design an iterative optimization algorithm that performs automatic evaluation on the generated impression results and composes the corresponding instruction prompts to further optimize the model. The proposed ImpressionGPT model achieves state-of-the-art performance on both MIMIC-CXR and OpenI datasets without requiring additional training data or fine-tuning the LLMs. This work presents a paradigm for localizing LLMs that can be applied in a wide range of similar application scenarios, bridging the gap between general-purpose LLMs and the specific language processing needs of various domains.
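
The abstract describes two mechanisms: a dynamic prompt assembled from semantically similar existing reports, and an iterative loop that automatically scores each generated impression and revises the instruction prompt accordingly. The sketch below illustrates that flow under stated assumptions: TF-IDF cosine similarity stands in for whatever retriever the paper actually uses, ROUGE-L against the retrieved neighbors' impressions stands in for its automatic evaluation signal (no ground truth exists at inference time), and `llm`, `threshold`, and `max_iters` are illustrative placeholders, not the paper's API or settings.

```python
"""Minimal sketch of an ImpressionGPT-style generate-evaluate-refine loop.
All names and hyperparameters are illustrative, not taken from the paper."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def retrieve_similar(findings, corpus, k=3):
    """Return the k (findings, impression) pairs most similar to `findings`.

    TF-IDF cosine similarity is a stand-in; any sentence-embedding
    retriever would slot in here instead.
    """
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform([findings] + [f for f, _ in corpus])
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    return [corpus[i] for i in sims.argsort()[::-1][:k]]


def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 via longest common subsequence over whitespace tokens."""
    c, r = candidate.split(), reference.split()
    # Classic LCS dynamic program.
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c):
        for j, rt in enumerate(r):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if ct == rt
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)


def generate_impression(findings, corpus, llm, threshold=0.35, max_iters=4):
    """Dynamic-prompt generation with iterative, self-evaluated refinement.

    `llm` is a placeholder callable (prompt -> text); `threshold` and
    `max_iters` are illustrative values, not the paper's.
    """
    neighbors = retrieve_similar(findings, corpus)
    good, bad = list(neighbors), []  # similar examples seed the prompt
    best, best_score = "", -1.0
    for _ in range(max_iters):
        prompt = "Write the Impression for a radiology report.\n\n"
        for f, imp in good:
            prompt += f"Findings: {f}\nImpression: {imp}\n\n"
        for b in bad:
            prompt += f"Avoid responses like: {b}\n\n"
        prompt += f"Findings: {findings}\nImpression:"
        draft = llm(prompt)
        # Score against the retrieved neighbors' impressions as a
        # proxy reference, since no ground truth exists at test time.
        score = max(rouge_l_f1(draft, imp) for _, imp in neighbors)
        if score > best_score:
            best, best_score = draft, score
        if score >= threshold:
            break
        bad.append(draft)  # steer the next prompt away from this draft
    return best
```

The key design point the abstract emphasizes is that all adaptation happens in the prompt: low-scoring drafts become explicit negative examples on the next iteration, so the base LLM is never fine-tuned and no additional training data is required.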

Authors (16)
  1. Chong Ma (28 papers)
  2. Zihao Wu (100 papers)
  3. Jiaqi Wang (218 papers)
  4. Shaochen Xu (16 papers)
  5. Yaonai Wei (6 papers)
  6. Zhengliang Liu (91 papers)
  7. Xi Jiang (53 papers)
  8. Lei Guo (110 papers)
  9. Xiaoyan Cai (15 papers)
  10. Shu Zhang (286 papers)
  11. Tuo Zhang (46 papers)
  12. Dajiang Zhu (68 papers)
  13. Dinggang Shen (153 papers)
  14. Tianming Liu (161 papers)
  15. Xiang Li (1003 papers)
  16. Fang Zeng (10 papers)
Citations (80)
