Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, Xiaohong | en_US
dc.identifier.citation | Wang, X. (2020). Medical image segmentation based on deep feature learning and multistage classification. Doctoral thesis, Nanyang Technological University, Singapore. | en_US
dc.description.abstract | Automatic segmentation of target organs in medical images plays a crucial role in the computer-aided diagnosis of human diseases, for example retinal vessels in fundus images and melanomas in dermoscopic images. Manual segmentation of these tissues is time-consuming and labor-intensive, and it is not feasible for clinicians to annotate large numbers of medical images by hand. A reliable medical image segmentation system requiring lower cost and less human interaction than observation-based techniques is therefore attractive in the field of medical image analysis. This thesis investigates medical image segmentation architectures based on deep feature learning and multistage classification. Specifically, we propose two novel multistage classification schemes for retinal vessel segmentation. For the more complex problem of analyzing large dermoscopic image data, we propose a novel deep feature learning scheme for skin lesion segmentation. The completed works are summarized as follows. In the first scheme, a novel and robust retinal vessel segmentation framework is proposed that cascades a set of computationally efficient Mahalanobis distance classifiers to form a highly nonlinear decision boundary. Unlike other nonlinear classifiers that require a predefined nonlinear kernel or iterative training, the proposed cascade classification framework is trained by a one-pass feedforward process. The degree of nonlinearity of the proposed classifier is thus not predefined but determined by the complexity of the data structure. Experimental evaluations show that the proposed cascade classification framework achieves consistently high vessel segmentation accuracy on three diverse, publicly available datasets. In the second scheme, a hierarchical architecture for retinal vessel segmentation is designed based on a divide-and-conquer strategy.
Current works on retinal vessel segmentation typically train a global discriminative model for vessel classification, which is not sufficient to fit the complex patterns of vessel structure. In fact, the large geometrical differences among retinal vessels of different scales and positions greatly limit the precision of the decision boundary of a global discriminative model. To overcome this problem, an efficient dividing algorithm, named multiplex vessel partition (MVP), is proposed to divide retinal vessel samples into well-constrained subsets in which samples with the same geometrical properties are grouped together. A set of homogeneous classifiers is then trained in parallel to form a discriminative decision for each subset. Moreover, a funnel-structured vessel segmentation (FsVS) framework is proposed to link the classification results from the disjoint subsets; it reduces the probability of a poor partition at the dividing phase and further enhances the discriminative capability of the decision model. Both quantitative and qualitative experimental comparisons on three publicly available datasets demonstrate the flexibility and efficiency of the proposed work on retinal vessel segmentation. In the last scheme, a bi-directional dermoscopic feature learning framework with multiscale consistent decision fusion is proposed for skin lesion segmentation. Previously published skin lesion segmentation works improve lesion detection performance with deep-learning-based methods such as the fully convolutional network (FCN). Nevertheless, the relationship between skin lesions and their informative context, as well as the consistency of the decisions from multiple classification layers, has not yet been well explored by these studies.
Unlike a plain FCN that learns an abstract feature representation of the image, this thesis proposes a bi-directional dermoscopic feature learning (biDFL) framework that produces rich dermoscopic feature maps by controlling information propagation from two complementary directions at the high-level parsing layer. By integrating bi-directional feature information passing, the proposed biDFL module gives the network better insight into the complex structure of skin lesions. Furthermore, this thesis proposes a multiscale consistent decision fusion (mCDF) that selectively focuses on the informative decisions generated from multiple classification layers. By analyzing the consistency of the decision at each position, mCDF automatically adjusts the reliability of the decisions and thus allows a more insightful skin lesion delineation. By embedding consistency analysis into the decisions from each classification layer, the proposed mCDF helps the network learn which scales of features are more desirable for each pixel. Comprehensive experimental results show the effectiveness of the proposed method on skin lesion segmentation, achieving state-of-the-art performance consistently on two publicly available dermoscopic image datasets. | en_US
dc.publisher | Nanyang Technological University | en_US
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). | en_US
dc.subject | Engineering::Electrical and electronic engineering | en_US
dc.title | Medical image segmentation based on deep feature learning and multistage classification | en_US
dc.type | Thesis-Doctor of Philosophy | en_US
dc.contributor.supervisor | Jiang Xudong | en_US
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.description.degree | Doctor of Philosophy | en_US
item.fulltext | With Fulltext | -
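The abstract's first scheme builds a nonlinear decision from cascaded Mahalanobis distance classifiers. As a minimal sketch of that building block only: the thesis's actual cascade construction, feature extraction, and one-pass training procedure are not reproduced here, and all function names and the toy data below are illustrative assumptions.

```python
import numpy as np

def mahalanobis_classifier(X_train, y_train):
    """Fit per-class means and a shared covariance; return a predictor.

    Illustrative single-stage classifier: assigns each sample to the
    class whose mean is nearest in Mahalanobis distance.
    """
    classes = np.unique(y_train)
    means = {c: X_train[y_train == c].mean(axis=0) for c in classes}
    # Covariance pooled over all samples, regularized for invertibility.
    cov = np.cov(X_train.T) + 1e-6 * np.eye(X_train.shape[1])
    cov_inv = np.linalg.inv(cov)

    def predict(X):
        # Squared Mahalanobis distance of each sample to each class mean.
        dists = np.stack([
            np.einsum('ij,jk,ik->i', X - means[c], cov_inv, X - means[c])
            for c in classes
        ], axis=1)
        return classes[np.argmin(dists, axis=1)]

    return predict

# Toy usage: two Gaussian blobs standing in for vessel / non-vessel pixels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
predict = mahalanobis_classifier(X, y)
print(predict(np.array([[0.1, -0.2], [4.2, 3.9]])))  # expect [0 1]
```

A single such classifier is linear-quadratic; the nonlinearity described in the abstract comes from cascading many of them, with depth driven by the data rather than fixed in advance.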
Appears in Collections:EEE Theses
Files in This Item:
File | Description | Size | Format
Amended thesis wxh.pdf |  | 5.62 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.