Please use this identifier to cite or link to this item:
Title: Deep learning for x-ray vision
Authors: Ng, Kenneth Chen Ee
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Ng, K. C. E. (2021). Deep learning for x-ray vision. Final Year Project (FYP), Nanyang Technological University, Singapore.
Abstract: Recent discussions have surfaced that cracks in additively manufactured material originate from pores. The resulting stress concentration at a pore initiates a crack that grows towards the next nearest pore, eventually leading to a point of failure. The objective of this study is to evaluate the feasibility of using simulated X-ray CT scans as a possible supplement to real images as training data for the detection of pores in CT images. A 3D model consisting of realistic pore-like structures was created in Tinkercad and uploaded to aRTist, where a simulated CT scan was performed to yield simulated CT images. The images were then pre-processed using VGStudio MAX and ImageJ. Using the Trainable Weka Segmentation plugin, each image was labelled semi-automatically, then manually corrected and transformed into mask images for training. Different segmentation models, such as U-Net and DeepLabV3, were then explored for the segmentation task. Comparing the results using the probability-of-detection score, we arrive at the conclusion that detection of pores relies heavily on real data as opposed to simulated data.
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Deep Learning for X-ray Vision.pdf (Restricted Access)
Description: FYP Report
Size: 3.13 MB
Format: Adobe PDF

Page view(s)

Updated on May 28, 2022


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.