Benchmarking vision-language models for diagnostics in emergency and critical care settings
Research output: Contribution to journal › Research article › Contributed › peer-reviewed
Abstract
The applicability of vision-language models (VLMs) to acute care in emergency and intensive care units remains underexplored. Using a multimodal dataset of diagnostic questions combining medical images with clinical context, we benchmarked several small open-source VLMs against GPT-4o. The open models demonstrated limited diagnostic accuracy (up to 40.4%), while GPT-4o significantly outperformed them (68.1%). These findings highlight the need for specialized training and optimization to make open-source VLMs suitable for acute care applications.
Details
| Original language | English |
|---|---|
| Article number | 423 |
| Journal | npj Digital Medicine |
| Volume | 8 |
| Issue number | 1 |
| Publication status | Published - Dec 2025 |
| Peer-reviewed | Yes |
External IDs
| ORCID | /0000-0002-3730-5348/work/198594679 |
|---|