Computer Science > Computation and Language

Title: An Iterative Optimizing Framework for Radiology Report Summarization with ChatGPT

Abstract: The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians, and it is typically written by radiologists based on the 'Findings' section. However, writing numerous impressions can be laborious and error-prone for radiologists. Although recent studies have achieved promising results in automatic impression generation by pre-training and fine-tuning language models on large-scale medical text data, such models often require substantial amounts of medical text data and generalize poorly. While large language models (LLMs) like ChatGPT have shown strong generalization capabilities and performance, their performance in specific domains, such as radiology, remains under-investigated and potentially limited. To address this limitation, we propose ImpressionGPT, which leverages the in-context learning capability of LLMs by constructing dynamic contexts using domain-specific, individualized data. This dynamic prompt approach enables the model to learn contextual knowledge from semantically similar examples drawn from existing data. Additionally, we design an iterative optimization algorithm that automatically evaluates the generated impressions and composes corresponding instruction prompts to further optimize the model. The proposed ImpressionGPT model achieves state-of-the-art performance on both MIMIC-CXR and OpenI datasets without requiring additional training data or fine-tuning the LLMs. This work presents a paradigm for localizing LLMs that can be applied in a wide range of similar application scenarios, bridging the gap between general-purpose LLMs and the specific language processing needs of various domains.
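The two components described in the abstract (dynamic prompt construction from semantically similar reports, and iterative optimization driven by automatic evaluation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the similarity retriever, the `SequenceMatcher`-based stand-in for the paper's ROUGE-style scoring, the use of the most similar report's impression as the evaluation target, and the `llm` callable are all simplifying assumptions made here.

```python
from difflib import SequenceMatcher

def retrieve_similar(findings, corpus, k=2):
    """Rank existing (findings, impression) pairs by textual similarity
    to the query findings, and return the top-k as in-context examples."""
    ranked = sorted(corpus,
                    key=lambda ex: SequenceMatcher(None, findings, ex[0]).ratio(),
                    reverse=True)
    return ranked[:k]

def build_prompt(findings, examples, feedback=None):
    """Compose a dynamic prompt: task instruction, similar examples,
    optional corrective feedback from the previous round, then the query."""
    parts = ["Summarize the radiology findings below into an impression."]
    for f, imp in examples:
        parts.append(f"Findings: {f}\nImpression: {imp}")
    if feedback:
        parts.append(feedback)
    parts.append(f"Findings: {findings}\nImpression:")
    return "\n\n".join(parts)

def score(candidate, reference):
    """Stand-in automatic metric (the paper uses ROUGE-style scores)."""
    return SequenceMatcher(None, candidate, reference).ratio()

def iterative_optimize(findings, llm, corpus, rounds=3, threshold=0.9):
    """Generate, evaluate, and refine the impression over several rounds,
    feeding the evaluation result back into the next instruction prompt."""
    examples = retrieve_similar(findings, corpus)
    reference = examples[0][1]  # impression of the most similar report as a proxy target
    feedback, best, best_score = None, None, -1.0
    for _ in range(rounds):
        prompt = build_prompt(findings, examples, feedback)
        candidate = llm(prompt)
        s = score(candidate, reference)
        if s > best_score:
            best, best_score = candidate, s
        if s >= threshold:
            break
        # Compose a corrective instruction prompt from the evaluation result.
        feedback = (f"A previous attempt was: {candidate!r}. "
                    f"Its score was {s:.2f}; revise it to match the style "
                    f"of the example impressions more closely.")
    return best, best_score
```

In the actual system the `llm` callable would be a ChatGPT API request and the metric a ROUGE score against good and bad responses; the loop structure, retrieve, prompt, generate, evaluate, re-prompt, is the part the abstract describes.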
Comments: Updated to the published version; "ImpressionGPT" has been removed from the title
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
MSC classes: 68T50, 68T37, 68T20
ACM classes: I.2.7
Journal reference: IEEE Transactions on Artificial Intelligence (Early Access), 12 February 2024
DOI: 10.1109/TAI.2024.3364586
Cite as: arXiv:2304.08448 [cs.CL]
  (or arXiv:2304.08448v3 [cs.CL] for this version)

Submission history

From: Chong Ma [view email]
[v1] Mon, 17 Apr 2023 17:13:42 GMT (390kb,D)
[v2] Wed, 3 May 2023 08:09:53 GMT (654kb,D)
[v3] Wed, 8 May 2024 04:22:26 GMT (1360kb,D)
