Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/169569
Title: Deep learning-based recognition and segmentation of intracranial aneurysms under small sample size
Authors: Zhu, Guangyu; Luo, Xueqi; Yang, Tingting; Cai, Li; Yeo, Joon Hock; Yan, Ge; Yang, Jian
Keywords: Engineering::Mechanical engineering
Issue Date: 2022
Source: Zhu, G., Luo, X., Yang, T., Cai, L., Yeo, J. H., Yan, G. & Yang, J. (2022). Deep learning-based recognition and segmentation of intracranial aneurysms under small sample size. Frontiers in Physiology, 13, 1084202. https://dx.doi.org/10.3389/fphys.2022.1084202
Journal: Frontiers in Physiology
Abstract: The manual identification and segmentation of intracranial aneurysms (IAs) involved in the 3D reconstruction procedure are labor-intensive and prone to human error. To meet the demands of routine clinical management and large cohort studies of IAs, fast and accurate patient-specific IA reconstruction has become a research frontier. In this study, a deep-learning-based framework for IA identification and segmentation was developed, and the impacts of image pre-processing and convolutional neural network (CNN) architectures on the framework's performance were investigated. Three-dimensional (3D) segmentation-dedicated architectures, including 3D UNet, VNet, and 3D Res-UNet, were evaluated. The dataset used in this study comprised 101 sets of anonymized cranial computed tomography angiography (CTA) images with 140 IA cases. After labeling and image pre-processing, a training set and a test set containing 112 and 28 IA lesions, respectively, were used to train and evaluate the CNNs mentioned above. The performances of the three CNNs were compared in terms of training performance, segmentation performance, and segmentation efficiency using multiple quantitative metrics. All three CNNs showed a non-zero voxel-wise recall (V-Recall) at the case level. Among them, 3D UNet exhibited the best overall segmentation performance under the relatively small sample size. The automatic segmentation results based on 3D UNet reached an average V-Recall of 0.797 ± 0.140 (3.5% and 17.3% higher than those of VNet and 3D Res-UNet, respectively), as well as an average Dice similarity coefficient (DSC) of 0.818 ± 0.100, which was 4.1% and 11.7% higher than that of VNet and 3D Res-UNet, respectively. Moreover, the average Hausdorff distance (HD) of 3D UNet was 3.323 ± 3.212 voxels, which was 8.3% and 17.3% lower than that of VNet and 3D Res-UNet, respectively. The three-dimensional deviation analysis also showed that the segmentations of 3D UNet had the smallest deviation, with a maximum distance of +1.4760/−2.3854 mm, an average distance of 0.3480 mm, a standard deviation (STD) of 0.5978 mm, and a root mean square (RMS) of 0.7269 mm. In addition, the average segmentation time (AST) of 3D UNet was 0.053 s, equal to that of 3D Res-UNet and 8.62% shorter than that of VNet. The results of this study suggest that the proposed deep learning framework integrated with 3D UNet can provide fast and accurate IA identification and segmentation.
URI: https://hdl.handle.net/10356/169569
ISSN: 1664-042X
DOI: 10.3389/fphys.2022.1084202
Schools: School of Mechanical and Aerospace Engineering
Rights: © 2022 Zhu, Luo, Yang, Cai, Yeo, Yan and Yang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: MAE Journal Articles
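For context on the evaluation metrics quoted in the abstract (voxel-wise recall, Dice similarity coefficient, and Hausdorff distance), the sketch below shows one common way to compute them on binary segmentation masks. This is an illustrative example only, not the authors' implementation; the function names and the synthetic masks are assumptions made for demonstration.

```python
# Illustrative sketch (not the paper's code): V-Recall, DSC, and Hausdorff
# distance between a predicted and a ground-truth binary 3D mask.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def voxel_recall(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of ground-truth voxels recovered by the prediction."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return np.logical_and(pred, gt).sum() / max(gt.sum(), 1)

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2 * np.logical_and(pred, gt).sum() / max(pred.sum() + gt.sum(), 1)

def hausdorff_voxels(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance in voxel units, computed here over all
    foreground voxel coordinates of both masks."""
    p = np.argwhere(pred.astype(bool))
    g = np.argwhere(gt.astype(bool))
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# Example with two small synthetic 3D masks (purely hypothetical data).
gt = np.zeros((16, 16, 16), dtype=bool)
gt[4:10, 4:10, 4:10] = True
pred = np.zeros_like(gt)
pred[5:10, 4:10, 4:10] = True
print(dice(pred, gt), voxel_recall(pred, gt), hausdorff_voxels(pred, gt))
```

Note that this sketch evaluates the Hausdorff distance over all foreground voxels; surface-only variants are also common and are reported in the same units (voxels or millimetres).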
Files in This Item:
File | Description | Size | Format
---|---|---|---
fphys-13-1084202.pdf | | 2.55 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.