Vision-language models for automated video analysis and documentation in laparoscopic surgery: a proof-of-concept study

Research output: Contribution to journal › Research article › Contributed › peer-review


Abstract

BACKGROUND: The ongoing shortage of medical personnel highlights the urgent need to automate clinical documentation and reduce administrative burden. Large Vision-Language Models (VLMs) show promise for supporting surgical documentation and intraoperative analysis.

METHODS: We conducted an observational, comparative performance study of two general-purpose VLMs, GPT-4o (OpenAI) and Gemini-1.5-pro (Google), from June to September 2024, using 15 cholecystectomy and 15 appendectomy videos (sampled at 1 fps) from the CholecT45 and LapApp datasets. Tasks included object detection (vessel clips, gauze, retrieval bags, bleeding), surgery type classification, appendicitis grading, and surgical report generation. In-context learning (ICL) was evaluated as an enhancement method. Performance was assessed using descriptive accuracy metrics.
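To make the evaluation setup concrete, the following is a minimal sketch of the kind of per-frame object-detection query such a study could issue against the GPT-4o API. The prompt wording, file names, and helper functions are illustrative assumptions, not the authors' published protocol.

```python
# Hypothetical per-frame VLM object-detection query (Python, openai>=1.0).
# Prompt wording, file names, and helpers are assumptions for illustration;
# they are not the study's exact protocol.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_frame(path: str) -> str:
    """Base64-encode one video frame (frames extracted beforehand at 1 fps)."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def detect_object(frame_path: str, target: str) -> str:
    """Ask the VLM a closed yes/no question about a single frame."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Is a {target} visible in this laparoscopic frame? "
                         "Answer only 'yes' or 'no'."},
                {"type": "image_url",
                 "image_url": {"url": "data:image/jpeg;base64,"
                                      + encode_frame(frame_path)}},
            ],
        }],
    )
    return response.choices[0].message.content.strip().lower()


# Example: check one sampled frame for gauze.
print(detect_object("frame_0042.jpg", "gauze"))
```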

RESULTS: Both models identified vessel clips with 100% accuracy. GPT-4o outperformed Gemini-1.5-pro in retrieval bag detection (100% vs. 93.3%) and gauze detection (93.3% vs. 60%), while Gemini-1.5-pro performed better in bleeding detection (93.3% vs. 86.7%). In surgery classification, Gemini-1.5-pro was more accurate for cholecystectomies (93% vs. 80%); both models achieved 60% accuracy for appendectomies. Appendicitis grading showed limited performance (GPT-4o: 40%, Gemini-1.5-pro: 26.7%). For surgical reports, GPT-4o produced more complete outputs (cholecystectomy [CCE]: 90.4%, appendectomy [APE]: 80.1%), while Gemini-1.5-pro achieved higher correctness overall (CCE: 71.1%, APE: 69.6%). ICL notably improved tool recognition (e.g., in APE step 4, GPT-4o improved from 69.2% to 80%), though its effect on recognition of the organ-removal step was inconsistent.
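The ICL enhancement reported above amounts to prepending labeled exemplar frames to the query, so the model sees worked examples before the test frame. Below is a hedged sketch of how such a few-shot message could be assembled; the step labels, file names, and helper functions are hypothetical.

```python
# Hypothetical few-shot (in-context learning) message construction:
# labeled exemplar frames precede the unlabeled test frame in one request.
import base64

from openai import OpenAI

client = OpenAI()


def data_url(path: str) -> str:
    """Embed a JPEG frame as a base64 data URL."""
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()


def classify_step_with_icl(exemplars: list[tuple[str, str]], test_frame: str) -> str:
    """exemplars: (frame_path, step_label) pairs shown to the model as examples."""
    content = [{"type": "text",
                "text": "Each example frame is followed by its surgical step. "
                        "Name the step shown in the final frame."}]
    for path, label in exemplars:
        content.append({"type": "image_url", "image_url": {"url": data_url(path)}})
        content.append({"type": "text", "text": f"Step: {label}"})
    content.append({"type": "image_url", "image_url": {"url": data_url(test_frame)}})

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content


# Example with hypothetical exemplar frames and labels.
examples = [("appendix_division.jpg", "division of the appendix"),
            ("bag_retrieval.jpg", "specimen retrieval in a bag")]
print(classify_step_with_icl(examples, "test_frame.jpg"))
```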

CONCLUSION: GPT-4o and Gemini-1.5-pro performed reliably in object detection and procedure classification but showed limitations in grading pathology and in accurately describing procedural steps; in-context learning partially mitigated these limitations. These findings show that domain-agnostic VLMs can be applied to surgical video analysis. In the future, VLMs equipped with surgical domain knowledge could serve as companion systems in the operating room.

Details

Original language: English
Journal: International journal of surgery (London, England)
Volume: 111
Issue number: 11
Publication status: E-pub ahead of print - 17 Jul 2025
Peer-reviewed: Yes

External IDs

unpaywall: 10.1097/js9.0000000000003069
ORCID: /0000-0002-3730-5348/work/198594695

Keywords