Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/171994
Title: Brain tissue segmentation on CT scans
Authors: Wang, Binli
Keywords: Engineering::Computer science and engineering
Issue Date: 2023
Publisher: Nanyang Technological University
Source: Wang, B. (2023). Brain tissue segmentation on CT scans. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/171994
Project: SCSE-8338 
Abstract: Accurate segmentation of brain tissues in medical imaging plays a pivotal role in diagnosis and treatment planning. This research presents a comprehensive study of brain tissue segmentation from CT scans and the application of the segmented tissues to Alzheimer's Disease (AD) detection. The report first explores three publicly available software packages (FSL, SPM, and FreeSurfer) for automated brain tissue segmentation from CT scans, which yields initial but insufficient segmentation results. Acknowledging this limitation, the report shifts its emphasis to enhancing an existing cascaded GAN architecture. We explore various data preprocessing techniques and Auto-Augmentation methods to improve the performance of the cascaded GAN-based architecture for brain tissue segmentation from CT scans. We also introduce the TransUNet network, a fusion of transformers and U-Net, as an alternative generator in the GAN architecture. The evaluation metrics show that it substantially outperforms the segmentation achieved with the conventional software libraries. The second part of the study focuses on AD detection using the segmented brain tissues. We establish baseline performance using brain tissue segmentation from MRI as well as from CT scans. This comparative analysis highlights the practicality of using CT scans for brain tissue segmentation and demonstrates the model's effectiveness across different data sources. To enhance model interpretability, we integrate the blur Integrated Gradients (IG) algorithm, which identifies the regions that contribute most to the AD detection model's decisions. The baseline model achieves strong precision, recall, and F1 scores, whereas the model using brain tissues segmented from CT scans requires further enhancement.
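The abstract refers to evaluation metrics for segmentation quality without naming them; the Dice similarity coefficient is the standard metric for comparing a predicted tissue mask against a reference mask. The following is an illustrative sketch only, not the report's actual evaluation code:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), in [0, 1]; eps guards against empty masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks standing in for predicted vs. reference tissue maps.
pred = np.array([[1, 1, 0], [0, 1, 0]])
ref = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(pred, ref), 3))  # → 0.667
```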
Our results show that the integration of Auto-Augmentation techniques and the use of the TransUNet architecture yield marginal improvements in brain tissue segmentation performance. However, using an improved mask derived from the FreeSurfer segmentation pipeline results in a slight decline in segmentation accuracy, likely due to the increased complexity of the mask. In summary, our approach consistently outperforms the conventional segmentation software libraries. For AD detection using MR scans, our 3D CNN achieves high precision, recall, and F1 scores, indicating its effectiveness in identifying AD cases. The integration of the blur IG algorithm enhances model interpretability by highlighting regions of interest in CT scans. In conclusion, this research offers valuable insights into enhancing brain tissue segmentation from CT scans and demonstrates the utility of segmented tissues for AD detection. The findings contribute to the field of medical image analysis and pave the way for more accurate and interpretable AD diagnosis using CT scans. This research will yield two publications: one academic paper and one technical document (TD).
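The blur Integrated Gradients method mentioned above attributes a model's decision to input regions by integrating input gradients along a path from a heavily blurred image back to the original. A minimal NumPy-only sketch of the idea follows; the model gradient here is a toy quadratic stand-in (not the report's AD classifier), and the blur is a simple separable Gaussian:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur of a 2-D array (zero-padded edges)."""
    if sigma <= 0:
        return img.copy()
    radius = max(1, int(3 * sigma))
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-xs**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)

def toy_model_grad(x):
    # Gradient of a toy score f(x) = sum(x**2); stands in for the
    # gradient of a classifier's output w.r.t. its input image.
    return 2.0 * x

def blur_integrated_gradients(x, grad_fn, max_sigma=10.0, steps=20):
    """Blur IG: accumulate grad * step along the blur path
    (most blurred -> original), using midpoint gradients."""
    sigmas = np.linspace(max_sigma, 0.0, steps + 1)
    path = [gaussian_blur(x, s) for s in sigmas]
    attribution = np.zeros_like(x)
    for i in range(steps):
        step_diff = path[i + 1] - path[i]
        midpoint = 0.5 * (path[i] + path[i + 1])
        attribution += step_diff * grad_fn(midpoint)
    return attribution

scan = np.random.rand(64, 64)  # stand-in for a CT slice
attr = blur_integrated_gradients(scan, toy_model_grad)
print(attr.shape)
```

Pixels with large attribution magnitude are those the (toy) score depends on most along the blur path; in the report's setting these would be overlaid on the CT slice as a saliency map.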
URI: https://hdl.handle.net/10356/171994
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: BrainTissueSegmentationOnCTscans_AmendedFinalReport_WangBinli.pdf (Restricted Access)
Description: Undergraduate project report
Size: 2.23 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.