Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/158390
Title: Optical-to-SAR image translation in remote sensing via generative adversarial network
Authors: Li, Jiahua
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Li, J. (2022). Optical-to-SAR image translation in remote sensing via generative adversarial network. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158390
Abstract: As remote sensing technology makes great progress, more and more remote sensing applications are being developed to meet rising needs. Satellite images have been widely applied in various fields, such as urban planning, geological exploration, and military object detection. In remote sensing, synthetic aperture radar (SAR) is one of the most widely used imaging devices. Compared with optical imaging, it is harder to acquire large numbers of SAR images because of the high cost of satellite-borne radar, so the annotations of an EO-SAR dataset may be only partially available. In addition, the lack of paired data severely limits the development of AI in remote sensing. In this project, artificial intelligence (AI) techniques were used for image generation. The generative adversarial network (GAN), one of the most widely used network architectures in AI, was explored for optical-to-SAR image translation. This research applied two GAN networks, CycleGAN and Pix2Pix, to realize the image generation. Finally, the feasibility and performance of the two networks were confirmed on a dataset containing optical images and their corresponding SAR images. By translating optical images into SAR images, this work aims to alleviate the shortage of paired multi-modal datasets in the remote sensing field, and the resulting data augmentation methods can benefit many large-scale AI applications in remote sensing. In future research, multi-modality fusion translation could combine lidar, optical, and SAR images; with more features from different modalities, image translation can become more accurate.
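
As a rough illustration only (not the author's code, which is in the restricted report), the sketch below shows a minimal Pix2Pix-style training step in PyTorch: a conditional GAN whose generator maps an optical image to a SAR-like image, trained with an adversarial loss plus an L1 loss against the paired SAR image. The generator G, discriminator D, optimizers, and data batches are placeholders assumed for illustration; lambda_l1 = 100 follows the weighting suggested in the original Pix2Pix paper.

    # Minimal sketch of a Pix2Pix-style optical-to-SAR training step (assumed setup).
    import torch
    import torch.nn as nn

    adv_loss = nn.BCEWithLogitsLoss()  # adversarial loss on discriminator logits
    l1_loss = nn.L1Loss()              # pixel-wise loss against the paired SAR image
    lambda_l1 = 100.0                  # L1 weighting as in the original Pix2Pix paper

    def train_step(G, D, opt_G, opt_D, optical, sar):
        """One update of generator G and discriminator D on a paired batch."""
        fake_sar = G(optical)

        # Discriminator update: real (optical, SAR) pairs -> 1, generated pairs -> 0.
        opt_D.zero_grad()
        real_logits = D(optical, sar)
        fake_logits = D(optical, fake_sar.detach())
        d_loss = adv_loss(real_logits, torch.ones_like(real_logits)) + \
                 adv_loss(fake_logits, torch.zeros_like(fake_logits))
        d_loss.backward()
        opt_D.step()

        # Generator update: fool the discriminator and stay close to the true SAR image.
        opt_G.zero_grad()
        fake_logits = D(optical, fake_sar)
        g_loss = adv_loss(fake_logits, torch.ones_like(fake_logits)) + \
                 lambda_l1 * l1_loss(fake_sar, sar)
        g_loss.backward()
        opt_G.step()
        return d_loss.item(), g_loss.item()
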
URI: https://hdl.handle.net/10356/158390
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
Finall Report_Li Jiahua.pdf (Restricted Access), 3.79 MB, Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.