Detection of suicidality from medical text using privacy-preserving large language models

Research output: Contribution to journal › Research article › Contributed › Peer-reviewed

Abstract

Background
Attempts to apply artificial intelligence (AI) to psychiatric disorders have shown moderate success, highlighting the potential of incorporating information from clinical assessments to improve the models. This study focuses on using large language models (LLMs) to detect suicide risk from medical text in psychiatric care.

Aims
To extract information about suicidality status from admission notes in electronic health records (EHRs) using privacy-preserving, locally hosted LLMs, specifically evaluating the efficacy of Llama-2 models.

Method
We compared the performance of several variants of the open-source LLM Llama-2 in extracting suicidality status from 100 psychiatric reports against a ground truth defined by human experts, assessing accuracy, sensitivity, specificity and F1 score across different prompting strategies.
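The evaluation described above reduces to scoring binary labels against an expert ground truth. A minimal sketch of how the four reported metrics can be computed from such labels is shown below; the label names and data are illustrative assumptions, not taken from the study.

```python
# Hedged sketch: scoring an LLM's binary suicidality labels against an
# expert-defined ground truth. Label strings and example data are
# hypothetical; the study's actual pipeline is not reproduced here.

def confusion_counts(y_true, y_pred, positive="suicidal"):
    """Count true/false positives and negatives for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def classification_metrics(y_true, y_pred):
    """Return accuracy, sensitivity, specificity and F1 score."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall on positives
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (precision + sensitivity) else 0.0)
    return accuracy, sensitivity, specificity, f1
```

Comparing prompting strategies then amounts to recomputing these metrics per strategy over the same 100-report test set.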

Results
A German fine-tuned Llama-2 model showed the highest accuracy (87.5%), sensitivity (83.0%) and specificity (91.8%) in identifying suicidality, with significant improvements in sensitivity and specificity across various prompt designs.

Conclusions
The study demonstrates the capability of LLMs, particularly Llama-2, in accurately extracting information on suicidality from psychiatric records while preserving data privacy. This suggests their application in surveillance systems for psychiatric emergencies and in improving the clinical management of suicidality through systematic quality control and research.

Details

Original language: English
Journal: British Journal of Psychiatry
Publication status: E-pub ahead of print - 5 Nov 2024
Peer-reviewed: Yes

External IDs

ORCID /0000-0002-3415-5583/work/171553716
ORCID /0000-0002-2666-859X/work/171553753
ORCID /0000-0002-3974-7115/work/171553864
ORCID /0000-0002-6808-2968/work/171554061
