Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/139739
Full metadata record
DC Field | Value | Language
dc.contributor.author | Xiong, Haitao | en_US
dc.date.accessioned | 2020-05-21T05:59:45Z | -
dc.date.available | 2020-05-21T05:59:45Z | -
dc.date.issued | 2020 | -
dc.identifier.uri | https://hdl.handle.net/10356/139739 | -
dc.description.abstract | Computer vision has come to be regarded in recent years as a helpful aid to doctors' diagnoses. Deep convolutional neural networks (CNNs) have been shown to improve performance on a wide range of computer vision tasks, for example object detection, image classification and semantic segmentation. In the medical field, rapid and accurate diagnosis can be critical for disease identification and patient treatment. This project therefore studied one of these fundamental problems, semantic segmentation, applied to chest X-ray images and cell images, and proposed a semi-supervised adversarial segmentation neural network. U-net is one of the most commonly used architectures for semantic segmentation, but it can only be trained on a labeled dataset. In practice, however, labeled medical images can be limited, because medical image labeling is time-consuming and, without medical knowledge, far from trivial. In this project, we propose to make use of unlabeled medical images to improve tissue and organ segmentation. We use a U-net architecture with a residual neural network (ResNet) or a VGG16 network as the backbone and integrate the U-net with a generative adversarial network (GAN) to exploit the unlabeled dataset. The segmentation network incorporates an adversarial network that discriminates whether a label map comes from the ground truth or from the segmentation network. In addition, the unlabeled medical images are used during the adversarial process to generate synthesized labels. Through this adversarial process, the unlabeled data play a role in training, and the segmentation network is guided to generate more realistic segmentations. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.relation | B3136-191 | en_US
dc.subject | Engineering::Electrical and electronic engineering | en_US
dc.title | Machine learning based x-ray/CT image analysis | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Huang Weimin | en_US
dc.contributor.supervisor | Lin Zhiping | en_US
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.description.degree | Bachelor of Engineering (Electrical and Electronic Engineering) | en_US
dc.contributor.organization | Institute for Infocomm Research, Agency for Science, Technology and Research | en_US
dc.contributor.supervisoremail | ezplin@ntu.edu.sg | en_US
item.grantfulltext | restricted | -
item.fulltext | With Fulltext | -
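
The abstract above describes the method only at a high level. As a purely illustrative aid, the sketch below shows, under stated assumptions, how such a semi-supervised adversarial segmentation step could be wired up in PyTorch: a U-net-style segmenter is trained with a supervised cross-entropy loss on labeled images plus an adversarial loss on both labeled and unlabeled images, while a small CNN discriminator judges whether a label map comes from the ground truth or from the segmenter. This is not taken from the report; the names (seg_net, Discriminator, train_step) and the weights lambda_adv and lambda_semi are assumptions made for illustration.

# Minimal sketch (not the author's released code) of the semi-supervised
# adversarial segmentation setup described in the abstract. Assumes a
# PyTorch U-net-style generator `seg_net` producing per-pixel class logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Judges whether a label map comes from the ground truth or from the segmenter."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),   # per-patch real/fake score
        )
    def forward(self, label_map):
        return self.net(label_map)

def train_step(seg_net, disc, opt_seg, opt_disc,
               x_lab, y_lab, x_unlab, lambda_adv=0.01, lambda_semi=0.1):
    """One combined update on a labeled batch (x_lab, y_lab) and an unlabeled batch x_unlab."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator step: ground-truth masks are "real", segmenter outputs are "fake".
    with torch.no_grad():
        pred_lab = torch.softmax(seg_net(x_lab), dim=1)
    y_onehot = F.one_hot(y_lab, pred_lab.shape[1]).permute(0, 3, 1, 2).float()
    d_real, d_fake = disc(y_onehot), disc(pred_lab)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    # Segmenter step: supervised cross-entropy on labeled data, plus an adversarial
    # term that pushes predictions on labeled and unlabeled images toward "real".
    logits_lab = seg_net(x_lab)
    loss_ce = F.cross_entropy(logits_lab, y_lab)
    probs_unlab = torch.softmax(seg_net(x_unlab), dim=1)
    d_lab, d_unlab = disc(torch.softmax(logits_lab, dim=1)), disc(probs_unlab)
    loss_adv = bce(d_lab, torch.ones_like(d_lab)) + lambda_semi * bce(d_unlab, torch.ones_like(d_unlab))
    loss_g = loss_ce + lambda_adv * loss_adv
    opt_seg.zero_grad(); loss_g.backward(); opt_seg.step()
    return loss_d.item(), loss_g.item()

In this sketch the unlabeled images contribute only through the discriminator's adversarial signal, which is what lets the segmenter benefit from data that has no ground-truth masks; the report's actual loss weighting and backbone configuration may differ.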
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
FYP_Final_Report.pdf | Restricted Access | 1.83 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.