Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/152688
Title: Development of virtual immunofluorescence images from hematoxylin and eosin-stained images for cancer diagnosis
Authors: Azam, Abu Bakr
Keywords: Science::Medicine::Computer applications
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Azam, A. B. (2021). Development of virtual immunofluorescence images from hematoxylin and eosin-stained images for cancer diagnosis. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/152688
Abstract: Haematoxylin and Eosin (H&E) staining is common, and viewing H&E-stained images under a brightfield microscope provides basic information about tumours and other nuclei. In contrast, immunohistochemical (IHC) images are crucial for cancer diagnosis because they can reveal more about tumours and their response to treatment. Multiplex immunofluorescence (mIF), a form of IHC, provides a more detailed understanding of the tumour by using darkfield microscopy and fluorescent cameras (rather than RGB cameras) together with special monoclonal antibody-based stains. This lets pathologists focus on multiple “biomarkers”, indicators of biological processes such as the immune response. If the same biopsy specimen is used for inspection, the related features obtained from H&E staining and multiplex IF can be used to build a Computer-Aided Diagnosis (CAD) system based on convolutional neural networks, which are widely used in object detection and image segmentation tasks. The study is divided into two parts: automated optical-flow-based image registration, and prediction of CD3 (a biomarker for T cells) regions using a special type of convolutional neural network called a generative adversarial network (GAN). Optical flow, k-means clustering, and Otsu thresholding are combined into a faster, more robust intensity-based image registration pipeline: the DAPI (4′,6-diamidino-2-phenylindole) channel of the mIF image is co-registered with the corresponding H&E image, after which the remaining mIF channels are transformed to match the registration. Finally, the CD3 channel image is superimposed on the matching H&E image to create the reference image needed for deep learning. Two GAN variants, the Pix2Pix GAN and the cycleGAN, are modified to work with the registered image dataset to predict CD3 regions.
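As an illustration of the Otsu-thresholding step in the registration pipeline, the following minimal NumPy sketch (not the thesis code; the function and the synthetic "DAPI channel" data are hypothetical) binarizes a bimodal intensity channel by choosing the threshold that maximizes between-class variance:

```python
import numpy as np

def otsu_threshold(image: np.ndarray, bins: int = 256) -> float:
    """Return the threshold that maximizes between-class variance (Otsu's method)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()            # per-bin probability
    w0 = np.cumsum(p)                # cumulative weight of the background class
    w1 = 1.0 - w0                    # weight of the foreground class
    mu = np.cumsum(p * centers)      # cumulative mean
    mu_t = mu[-1]                    # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * w1)  # between-class variance
    sigma_b = np.nan_to_num(sigma_b)
    return float(centers[np.argmax(sigma_b)])

# Synthetic bimodal intensities: dim background plus bright nucleus-like peak
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(30, 5, 5000), rng.normal(200, 10, 1000)])
t = otsu_threshold(img)
mask = img > t   # binary "nuclei" mask of the kind used for intensity-based alignment
```

In the pipeline described above, a mask like this would feed the intensity-based registration of the DAPI channel against the H&E image; here it simply separates the two synthetic intensity populations.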
Because mIF images can only be acquired with expensive, complex machines, inexpensive and easily obtained H&E images can now be used in conjunction with GAN models to obtain similar data. This could significantly reduce the cost of cancer treatment: the method not only yields multi-modal image data from a single image type, but also helps bring cancer immunotherapy, a form of treatment that depends on such images, into the mainstream.
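For context, a cycleGAN pairs two generators (here, H&E→mIF and mIF→H&E) and trains them with an adversarial term plus a cycle-consistency term. The sketch below is illustrative only and is not the thesis code; the least-squares adversarial loss and the λ = 10 cycle weight follow the original cycleGAN paper, and all names and toy arrays are hypothetical:

```python
import numpy as np

def lsgan_generator_loss(d_fake: np.ndarray) -> float:
    """Least-squares adversarial loss: push discriminator scores on fakes toward 1."""
    return float(np.mean((d_fake - 1.0) ** 2))

def cycle_consistency_loss(real: np.ndarray, reconstructed: np.ndarray,
                           lam: float = 10.0) -> float:
    """L1 penalty between x and its round-trip F(G(x)), weighted by lambda."""
    return float(lam * np.mean(np.abs(real - reconstructed)))

# Toy example with random arrays standing in for image patches
rng = np.random.default_rng(1)
x = rng.random((4, 4))          # stand-in for an H&E patch
x_rec = x + 0.05                # stand-in for the round-trip reconstruction F(G(x))
d_scores = np.full((4,), 0.8)   # stand-in discriminator outputs on generated patches

total = lsgan_generator_loss(d_scores) + cycle_consistency_loss(x, x_rec)
```

The cycle term is what lets cycleGAN learn from unpaired data: even without a pixel-aligned mIF target, the generator is penalized if translating to the mIF domain and back fails to reproduce the original H&E patch.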
URI: https://hdl.handle.net/10356/152688
DOI: 10.32657/10356/152688
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:MAE Theses

Files in This Item:
File: Amended_thesis_Azam Abu Bakr - Assoc Prof Cai Yiyu.pdf
Size: 3.3 MB
Format: Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.