Current applications and challenges in large language models for patient care: a systematic review

Research output: Contribution to journal › Research article › Peer-reviewed

Contributors

  • Felix Busch, Technical University of Munich (Author)
  • Lena Hoffmann, Charité – Universitätsmedizin Berlin (Author)
  • Christopher Rueger, Charité – Universitätsmedizin Berlin (Author)
  • Elon H.C. van Dijk, Leiden University, Sir Charles Gairdner Hospital (Author)
  • Rawen Kader, University College London (Author)
  • Esteban Ortiz-Prado, Universidad de las Américas - Ecuador (Author)
  • Marcus R. Makowski, Technical University of Munich (Author)
  • Luca Saba, University Hospital of Cagliari (Author)
  • Martin Hadamitzky, Technical University of Munich (Author)
  • Jakob Nikolas Kather, Else Kröner Fresenius Center for Digital Health, National Center for Tumor Diseases (NCT) Heidelberg (Author)
  • Daniel Truhn, RWTH Aachen University (Author)
  • Renato Cuocolo, University of Salerno (Author)
  • Lisa C. Adams, Technical University of Munich (Author)
  • Keno K. Bressem, Technical University of Munich (Author)

Abstract

Background: The introduction of large language models (LLMs) into clinical practice promises to improve patient education and empowerment, thereby personalizing medical care and broadening access to medical knowledge. Despite the popularity of LLMs, there is a significant gap in systematized information on their use in patient care. Therefore, this systematic review aims to synthesize current applications and limitations of LLMs in patient care.

Methods: We systematically searched 5 databases for qualitative, quantitative, and mixed methods articles on LLMs in patient care published between 2022 and 2023. From 4349 initial records, 89 studies across 29 medical specialties were included. Quality assessment was performed using the Mixed Methods Appraisal Tool 2018. A data-driven convergent synthesis approach was applied for thematic syntheses of LLM applications and limitations using free line-by-line coding in Dedoose.

Results: We show that most studies investigate Generative Pre-trained Transformers (GPT)-3.5 (53.2%, n = 66 of 124 different LLMs examined) and GPT-4 (26.6%, n = 33/124) in answering medical questions, followed by patient information generation, including medical text summarization or translation, and clinical documentation. Our analysis delineates two primary domains of LLM limitations: design and output. Design limitations include 6 second-order and 12 third-order codes, such as lack of medical domain optimization, data transparency, and accessibility issues, while output limitations include 9 second-order and 32 third-order codes, for example, non-reproducibility, non-comprehensiveness, incorrectness, unsafety, and bias.

Conclusions: This review systematically maps LLM applications and limitations in patient care, providing a foundational framework and taxonomy for their implementation and evaluation in healthcare settings.

Details

Original language: English
Article number: 26
Number of pages: 13
Journal: Communications Medicine
Volume: 5 (2025)
Issue number: 1
Publication status: Published - 21 Jan 2025
Peer-reviewed: Yes

External IDs

PubMed 39838160
Mendeley c38a0e4c-3cbc-3c1f-8105-4f703ddaacda
ORCID /0000-0002-3730-5348/work/198594664