Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/158052
Title: Deep generative model for remote sensing
Authors: Kok, Melvin Xinwei
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Kok, M. X. (2022). Deep generative model for remote sensing. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158052
Abstract: Remote sensing is the practice of identifying and monitoring an area's physical features by detecting its reflected and emitted radiation from a distance, typically from satellite or aircraft. Researchers can "sense" characteristics of the Earth by using special cameras to acquire remotely sensed imagery. While the term satellite imagery typically brings Electro-Optical (EO) Red-Green-Blue (RGB) images to mind, remote sensing encompasses many other important computational imaging systems, such as synthetic aperture radar (SAR) imaging, multispectral image fusion, and infrared imaging. These non-EO-RGB imaging systems each have their own advantages and properties, but the most common is SAR imaging, which is the focus of this project. SAR imaging is particularly useful because it can capture images of the Earth's surface day or night and regardless of weather conditions. This is in contrast to EO imagery, whose quality is subject to changes in illumination from the sun and to weather conditions such as cloud cover. However, unlike EO images, which are readily available thanks to large commercial projects (e.g., Google Maps), SAR image data is scarcer and more expensive to obtain. In this project, image-to-image translation is performed on EO images to transfer them to the SAR domain, providing more data for machine learning models to learn from on SAR datasets. To transfer large-scale optical RGB images to the desired imaging modality, i.e., SAR, a series of GAN-based image-to-image translation techniques was tested, including Pix2Pix and CycleGAN. Testing showed that using GANs to perform image-to-image translation on satellite imagery is feasible, but that it requires refinement to capture all the features of the source domain adequately.
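The CycleGAN approach mentioned in the abstract rests on a cycle-consistency constraint: a generator G maps EO images to the SAR domain, a second generator F maps back, and F(G(x)) should reconstruct the original EO input. A minimal NumPy sketch of this idea, using toy linear "generators" in place of the report's actual convolutional networks (all dimensions and names here are illustrative assumptions, not the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "generators": G maps a 3-dim EO feature vector to a
# hypothetical 4-dim SAR feature space; F maps back to the EO space.
G = rng.normal(size=(4, 3))
F = np.linalg.pinv(G)  # ideal inverse, so the cycle loss is ~0 here

eo = rng.normal(size=(5, 3))                  # batch of 5 EO feature vectors
fake_sar = eo @ G.T                           # EO -> SAR translation
cycle_eo = fake_sar @ F.T                     # SAR -> back to EO
cycle_loss = np.mean(np.abs(cycle_eo - eo))   # L1 cycle-consistency term
print(fake_sar.shape, cycle_loss)
```

In a real CycleGAN this L1 cycle-consistency loss is minimised jointly with adversarial losses from two discriminators, which is what lets the model train on unpaired EO and SAR images; Pix2Pix, by contrast, needs pixel-aligned EO/SAR pairs.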
URI: https://hdl.handle.net/10356/158052
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
Melvin Kok FYP Final Report.pdf (Restricted Access, 1.72 MB, Adobe PDF)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.