Deep-DM: Deep-driven deformable model for 3D image segmentation using limited data

Publication: Contribution to journal › Research article › Contributed › Peer-reviewed


Abstract

Objective - Medical image segmentation is essential for several clinical tasks, including diagnosis, surgical and treatment planning, and image-guided interventions. Deep Learning (DL) methods have become the state of the art for many image segmentation scenarios. However, effectively training a DL model requires a large, well-annotated dataset, which is usually difficult to obtain in clinical practice, especially for 3D images.

Methods - In this paper, we propose Deep-DM, a learning-guided deformable model framework for 3D medical image segmentation using limited training data. In the proposed method, an energy function is learned by a Convolutional Neural Network (CNN) and integrated into an explicit deformable model to drive the evolution of an initial surface towards the object to segment. Specifically, the learning-based energy function is iteratively computed from localized anatomical representations of the image, which contain the image information around the evolving surface at each iteration. By focusing on localized regions of interest, this representation excludes irrelevant image information, facilitating the learning process.

Results and conclusion - The performance of the proposed method is demonstrated for left ventricle and fetal head segmentation in ultrasound, left atrium segmentation in Magnetic Resonance, and bladder segmentation in Computed Tomography, using different numbers of training volumes in each study. The results show that the proposed method can segment different anatomical structures across imaging modalities. Moreover, the proposed approach is less dependent on the size of the training dataset than state-of-the-art DL-based segmentation methods, outperforming them in all tasks when only a small number of samples is available.

Significance - By offering a more robust and less data-intensive approach to accurately segmenting anatomical structures, the proposed method has the potential to enhance clinical tasks that rely on image segmentation.
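
To make the described pipeline concrete, below is a minimal, illustrative sketch of the iterative loop implied by the abstract: crop a localized image region around each vertex of the evolving surface, let a CNN predict a learned displacement derived from the energy term, and update the surface. This is not the authors' implementation; all names (extract_local_patches, energy_cnn, step), shapes, and the single-displacement formulation are hypothetical.

```python
# Hypothetical sketch of a learning-guided deformable model loop, NOT the
# Deep-DM reference implementation. Names, shapes, and the CNN are assumptions.
import numpy as np

def extract_local_patches(volume, surface_points, patch_size=32):
    """Crop a localized image patch around each surface vertex (hypothetical helper)."""
    half = patch_size // 2
    padded = np.pad(volume, half, mode="edge")  # avoid out-of-bounds crops at the border
    patches = []
    for z, y, x in np.round(surface_points).astype(int):
        patches.append(padded[z:z + patch_size, y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def segment(volume, initial_surface, energy_cnn, n_iters=50, step=0.5):
    """Iteratively deform an initial surface using CNN-predicted per-vertex displacements."""
    surface = initial_surface.copy()  # (N, 3) vertex coordinates
    for _ in range(n_iters):
        # Localized anatomical representation: image information around the evolving surface.
        patches = extract_local_patches(volume, surface)
        # energy_cnn is assumed to map each patch to a displacement vector,
        # e.g. the negative gradient of a learned energy at that vertex.
        displacements = energy_cnn(patches)  # (N, 3)
        surface = surface + step * displacements  # evolve the surface
    return surface
```

The actual energy formulation, surface representation, and regularization used in Deep-DM are detailed in the paper; the sketch only conveys the overall evolve-from-local-patches loop.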

Details

Original language: English
Pages (from-to): 7287-7299
Number of pages: 13
Journal: IEEE Journal of Biomedical and Health Informatics
Volume: 28
Issue number: 12
Publication status: Published - 2024
Peer-review status: Yes

External IDs

Mendeley: ce528504-73d9-3789-b7e6-a160878e4f3f

Keywords

  • 3D segmentation, Anatomic representation, Anatomical structure, Annotations, Deformable models, Image segmentation, Learning-driven deformable model, Limited training data, Task analysis, Three-dimensional displays, Training