Detection of Suicidality Through Privacy-Preserving Large Language Models

Publication: Preprint/Documentation/Report › Preprint

Abstract

Importance: Attempts to use artificial intelligence (AI) in psychiatric disorders show moderate success, highlighting the potential of incorporating information from clinical assessments to improve the models. The study focuses on using large language models (LLMs) to process unstructured medical text, particularly for suicide risk detection in psychiatric care.

Objective: The study aims to extract information on suicidality status from the admission notes of electronic health records (EHR) using privacy-preserving, locally hosted LLMs, specifically evaluating the efficacy of Llama-2 models.

Main Outcomes and Measures: The study compares the performance of several variants of the open-source LLM Llama-2 in extracting suicidality status from psychiatric reports against a ground truth defined by human experts, assessing accuracy, sensitivity, specificity, and F1 score across different prompting strategies.

Results: A German fine-tuned Llama-2 model showed the highest accuracy (87.5%), sensitivity (83%), and specificity (91.8%) in identifying suicidality, with significant improvements in sensitivity and specificity across various prompt designs.

Conclusions and Relevance: The study demonstrates the capability of LLMs, particularly Llama-2, to accurately extract information on suicidality from psychiatric records while preserving data privacy. This suggests their application in surveillance systems for psychiatric emergencies and in the clinical management of suicidality through systematic quality control and research.

Key Points

Question: Can large language models (LLMs) accurately extract information on suicidality from electronic health records (EHR)?

Findings: In this analysis of 100 psychiatric admission notes using Llama-2 models, the German fine-tuned model (Emgerman) demonstrated the highest accuracy (87.5%), sensitivity (83%), and specificity (91.8%) in identifying suicidality, indicating the model's effectiveness in on-site processing of clinical documentation for suicide risk detection.

Meaning: The study highlights the effectiveness of LLMs, particularly Llama-2, in accurately extracting information on suicidality from psychiatric records while preserving data privacy. It recommends further evaluating these models to integrate them into clinical management systems to improve detection of psychiatric emergencies and enhance systematic quality control and research in mental health care.
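The evaluation described above compares model-extracted suicidality labels with an expert-defined ground truth using accuracy, sensitivity, specificity, and F1 score. The Python sketch below is not the authors' code; it only illustrates, under assumed binary labels, a hypothetical prompt template, and a hypothetical answer parser, how output from a locally hosted LLM could be scored with these metrics.

```python
# Minimal sketch, not the authors' implementation: assumes binary labels
# (1 = suicidality documented, 0 = not documented) parsed from the output
# of a locally hosted Llama-2 model, scored against expert annotations
# with the metrics named in the abstract.
from sklearn.metrics import confusion_matrix, f1_score

# Hypothetical prompt template; {note} would be replaced with the admission
# note text before sending the prompt to the locally hosted model.
PROMPT = (
    "You are given a psychiatric admission note. "
    "Answer with 'yes' or 'no': does the note indicate current suicidality?\n\n"
    "Note: {note}"
)

def parse_label(llm_answer: str) -> int:
    """Map a free-text model answer to a binary label (hypothetical parser)."""
    return 1 if llm_answer.strip().lower().startswith("yes") else 0

# Illustrative toy data; the study used 100 expert-annotated admission notes.
ground_truth = [1, 0, 1, 1, 0, 0, 1, 0]                   # expert annotations
predictions = [parse_label(a) for a in
               ["Yes", "No", "yes", "No", "no", "No", "Yes", "yes"]]

tn, fp, fn, tp = confusion_matrix(ground_truth, predictions, labels=[0, 1]).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)       # recall for the positive (suicidality) class
specificity = tn / (tn + fp)
f1 = f1_score(ground_truth, predictions)

print(f"accuracy={accuracy:.3f}  sensitivity={sensitivity:.3f}  "
      f"specificity={specificity:.3f}  F1={f1:.3f}")
```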

Details

Original language: English
Publication status: Published - 8 March 2024

External IDs

ORCID /0000-0002-3974-7115/work/158766091
ORCID /0000-0002-2666-859X/work/158766708
ORCID /0000-0002-3415-5583/work/158767150
ORCID /0000-0002-6808-2968/work/158767971

Keywords

  • psychiatry and clinical psychology