Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/66017
Title: Efficient feature extraction and classification for staining patterns of HEp-2 cells
Authors: Xu, Xiang
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2016
Source: Xu, X. (2016). Efficient feature extraction and classification for staining patterns of HEp-2 cells. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: The occurrence of antinuclear antibodies (ANAs) in patient serum is strongly associated with autoimmune diseases. ANA detection can be performed with the indirect immunofluorescence (IIF) technique, using human epithelial type 2 (HEp-2) cells as the laboratory substrate. Identification of IIF slide images is traditionally based on human visual inspection, which suffers from subjective and inconsistent evaluation. A Computer-Aided Diagnosis (CAD) system to support doctors' diagnosis is therefore essential. The aim of this thesis is to develop novel methods for automatically classifying the positive staining patterns of HEp-2 cells. First, we investigate the Bag-of-Words (BoW) framework, one of the most successful image representations. To reduce the inevitable information loss caused by the coding process, we propose a Linear Local Distance Coding (LLDC) method. LLDC transforms each original local feature into a more discriminative local distance vector by searching for its local neighbors in class-specific manifolds; the local distance vectors are then encoded and pooled into a salient image representation. Combined with traditional coding methods, the proposed method achieves higher classification accuracy. Secondly, we propose a rotation-invariant textural feature, Pairwise Local Ternary Patterns with Spatial Rotation Invariant (PLTP-SRI). It is invariant to image rotation and robust to noise and weak illumination, and an added spatial pyramid structure lets it capture spatial layout information. While the proposed PLTP-SRI feature extracts local information, the BoW framework builds a global image representation, so it is natural to combine the two: the combined feature exploits the complementary advantages of both kinds of features and achieves impressive classification performance.
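To make the texture-feature idea concrete, the following is a minimal sketch of the classical Local Ternary Pattern (LTP) on which PLTP-SRI builds; the 3x3 patch, the tolerance parameter `t`, and the function name are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def ltp_codes(patch, t=5):
    """Local Ternary Pattern codes for the centre pixel of a 3x3 patch.

    Each of the 8 neighbours is thresholded into {-1, 0, +1} around the
    centre value using a tolerance band of width t, then the ternary
    pattern is split into an 'upper' (+1 positions) and a 'lower'
    (-1 positions) binary code -- the standard LTP decomposition.
    """
    c = float(patch[1, 1])
    # 8 neighbours in clockwise order starting from the top-left corner
    nbrs = np.array([patch[0, 0], patch[0, 1], patch[0, 2],
                     patch[1, 2], patch[2, 2], patch[2, 1],
                     patch[2, 0], patch[1, 0]], dtype=float)
    tern = np.where(nbrs >= c + t, 1, np.where(nbrs <= c - t, -1, 0))
    weights = 2 ** np.arange(8)          # binary weights per position
    upper = int(np.sum((tern == 1) * weights))
    lower = int(np.sum((tern == -1) * weights))
    return upper, lower

patch = np.array([[90, 100, 110],
                  [95, 100, 130],
                  [60, 100,  40]])
print(ltp_codes(patch, t=5))  # -> (12, 209)
```

The tolerance band is what makes LTP more noise-robust than LBP: small intensity fluctuations around the centre value map to 0 instead of flipping a bit.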
Finally, we design a Co-occurrence Differential Texton (CoDT) feature to represent the local image patches of HEp-2 cells. The CoDT feature reduces information loss by skipping the quantization step while exploiting the spatial relations among differential micro-texton features, which increases its discriminative power. We build a generative model to adaptively characterize the CoDT feature space of the training data, and further derive a discriminant representation for HEp-2 cell images based on the adaptively partitioned feature space, so the resulting representation is tailored to the classification task. Coupled with a linear Support Vector Machine (SVM) classifier, the proposed framework exploits the advantages of both generative and discriminative approaches to image classification. Throughout, we evaluate the proposed methods on two publicly available HEp-2 cell datasets: the ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and the ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis.
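The quantization loss that both LLDC and CoDT address arises in the hard-assignment coding step of the standard BoW pipeline. A minimal sketch of that baseline step is below; the toy 2-D descriptors and 2-word codebook are illustrative assumptions, not data from the thesis.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Hard-assignment BoW coding: quantise each local descriptor to its
    nearest codeword and return a normalised codeword histogram.

    Every descriptor collapses to a single codeword index here, which is
    exactly the information loss that softer coding schemes (and CoDT's
    quantization-free design) try to avoid.
    """
    # pairwise squared distances, shape (n_descriptors, n_codewords)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    assign = d2.argmin(axis=1)                       # nearest codeword per descriptor
    hist = np.bincount(assign, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                         # normalised histogram

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
desc = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.8], [0.2, -0.1]])
print(bow_histogram(desc, codebook))  # -> [0.5 0.5]
```

The resulting fixed-length histogram is what a linear SVM would consume as the global image representation in the baseline pipeline.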
URI: https://hdl.handle.net/10356/66017
DOI: 10.32657/10356/66017
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Theses

Files in This Item:
File: XU XIANG (G1101222A) - Thesis.pdf | Description: thesis | Size: 2.9 MB | Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.