Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/158052
Full metadata record
DC Field | Value | Language
dc.contributor.author | Kok, Melvin Xinwei | en_US
dc.date.accessioned | 2022-05-26T06:09:02Z | -
dc.date.available | 2022-05-26T06:09:02Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Kok, M. X. (2022). Deep generative model for remote sensing. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158052 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/158052 | -
dc.description.abstract | Remote sensing is the practice of identifying and monitoring an area's physical features by detecting its reflected and transmitted radiation from a distance, typically from a satellite or aircraft. Researchers can "sense" characteristics of the Earth by using special cameras to acquire remotely sensed imagery. While the term satellite imagery typically brings Electro-Optical (EO) Red-Green-Blue (RGB) images to mind, remote sensing also uses other important computational imaging systems such as synthetic aperture radar (SAR) imaging, multispectral image fusion, and infra-red imaging. These non-EO-RGB imaging systems each have their own advantages and properties; the most common among them is SAR imaging, which is the focus of this project. SAR imaging is particularly useful because it can always capture images of the Earth's surface, day or night and regardless of weather conditions. This is in contrast to EO imagery, whose quality depends on solar illumination and on weather conditions such as cloud cover. However, unlike EO images, which are widely available thanks to large commercial projects (e.g., Google Maps), SAR image data is scarcer and more expensive to obtain. In this project, image-to-image translation is performed on EO images to transfer them to the SAR domain, providing more data for machine learning models to learn from SAR datasets. To transfer large-scale optical RGB images to the desired imaging modality, i.e., SAR, a series of image-to-image translation techniques based on Generative Adversarial Networks (GANs) was tested, including Pix2Pix and CycleGAN. Testing showed that GAN-based image-to-image translation of satellite imagery is feasible but requires further refinement to adequately capture all the features of the source domain. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.subject | Engineering::Electrical and electronic engineering | en_US
dc.title | Deep generative model for remote sensing | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Wen Bihan | en_US
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.description.degree | Bachelor of Engineering (Electrical and Electronic Engineering) | en_US
dc.contributor.supervisoremail | bihan.wen@ntu.edu.sg | en_US
item.grantfulltext | restricted | -
item.fulltext | With Fulltext | -
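The abstract above names Pix2Pix and CycleGAN, but the record itself contains no implementation details. The following is a minimal, hypothetical sketch (in PyTorch) of one Pix2Pix-style training step for EO-to-SAR translation. The toy Generator and PatchDiscriminator architectures, the block helper, channel counts, tile sizes, the L1 loss weight of 100.0, and the optimiser settings are illustrative assumptions, not taken from the report.

import torch
import torch.nn as nn

def block(in_ch, out_ch, down=True):
    # 4x4 stride-2 conv (downsample) or transposed conv (upsample), then norm + ReLU.
    conv = nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1) if down \
        else nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1)
    return nn.Sequential(conv, nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))

class Generator(nn.Module):
    # Toy encoder-decoder (hypothetical): 3-channel EO (RGB) tile in, 1-channel SAR-like tile out.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            block(3, 64), block(64, 128), block(128, 256),
            block(256, 128, down=False), block(128, 64, down=False),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, eo):
        return self.net(eo)

class PatchDiscriminator(nn.Module):
    # PatchGAN-style critic scoring concatenated (EO, SAR) pairs patch by patch.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            block(4, 64), block(64, 128),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )
    def forward(self, eo, sar):
        return self.net(torch.cat([eo, sar], dim=1))

G, D = Generator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

eo = torch.randn(2, 3, 64, 64)   # stand-in batch of EO (RGB) tiles
sar = torch.randn(2, 1, 64, 64)  # stand-in batch of co-registered SAR tiles

# Discriminator step: push real pairs towards 1 and generated pairs towards 0.
fake = G(eo)
d_real, d_fake = D(eo, sar), D(eo, fake.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close (L1) to the paired SAR tile.
d_fake = D(eo, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, sar)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

A CycleGAN-style variant would replace the paired L1 term with a cycle-consistency loss between two generators, which is what allows training on unpaired EO and SAR collections.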
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
Melvin Kok FYP Final Report.pdf (Restricted Access) | - | 1.72 MB | Adobe PDF

