An augmented reality overlay for navigated prostatectomy using fiducial-free 2D–3D registration

Research output: Contribution to journal › Research article › Contributed › Peer-reviewed

Abstract

Purpose: Markerless navigation in minimally invasive surgery remains an unsolved challenge. Many proposed navigation systems for minimally invasive surgery rely on stereoscopic images, whereas monocular endoscopes are often used in clinical practice. Given the additional lack of automatic video-based navigation systems for prostatectomy, this paper explores methods that address both research gaps for robot-assisted prostatectomy. Methods: To realize a semi-automatic augmented reality overlay for navigated prostatectomy, the camera pose with respect to the prostate needs to be estimated. We developed a method in which visual cues are drawn on top of the organ after an initial manual alignment, simultaneously creating matching landmarks in the 2D and 3D data. Starting from this key frame, the cues are tracked in the endoscopic video. Both PnPRansac and differentiable rendering are then explored to perform 2D–3D registration for each frame. Results: We performed experiments on synthetic and in vivo data. On synthetic data, differentiable rendering achieves a median target registration error of 6.11 mm. Both PnPRansac and differentiable rendering are feasible methods for 2D–3D registration. Conclusion: We demonstrated a video-based markerless augmented reality overlay for navigated prostatectomy, using visual cues as an anchor.
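The accuracy metric reported above, target registration error (TRE), is the Euclidean distance between corresponding target points after a registration has been applied; the 6.11 mm figure is the median of such per-point errors. A minimal sketch of that computation (the function name and the sample points are illustrative, not taken from the paper):

```python
import math

def median_target_registration_error(estimated_pts, reference_pts):
    """Median Euclidean distance (e.g. in mm) between corresponding
    3D target points after 2D-3D registration has been applied."""
    errors = sorted(
        math.dist(p, q) for p, q in zip(estimated_pts, reference_pts)
    )
    n = len(errors)
    mid = n // 2
    # Even count: average the two middle errors; odd count: take the middle one.
    return errors[mid] if n % 2 else 0.5 * (errors[mid - 1] + errors[mid])

# Hypothetical registered vs. ground-truth target points (mm):
est = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 5.0, 0.0)]
ref = [(1.0, 0.0, 0.0), (10.0, 2.0, 0.0), (0.0, 5.0, 3.0)]
print(median_target_registration_error(est, ref))  # 2.0
```

In the PnPRansac variant described in the abstract, the pose itself would come from solving the perspective-n-point problem on the tracked 2D cues and their 3D landmark counterparts (e.g. via OpenCV's `solvePnPRansac`); the TRE then measures how well that pose maps held-out target points.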

Details

Original language: English
Pages (from-to): 1265-1272
Number of pages: 8
Journal: International Journal of Computer Assisted Radiology and Surgery
Volume: 20
Issue number: 6
Publication status: Published - Jun 2025
Peer-reviewed: Yes

External IDs

PubMed: 40341464
ORCID: /0000-0002-4590-1908/work/190134675