Please use this identifier to cite or link to this item:
Title: Optical-to-SAR image translation In remote sensing via generative adversarial network
Authors: Li, Jiahua
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Li, J. (2022). Optical-to-SAR image translation In remote sensing via generative adversarial network. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158390
Abstract: As remote sensing technology makes great progress, more and more remote sensing applications are deployed to satisfy rising needs. Satellite images have been widely applied in various fields, such as urban planning, geological exploration, and military object detection. Among remote sensing technologies, synthetic aperture radar (SAR) is one of the most widely used imaging modalities. Compared with optical imaging, it is harder to acquire large numbers of SAR images because of the high cost of remote sensing satellites. Consequently, annotations for EO-SAR datasets may be only partially available, and the lack of paired data severely limits the development of AI in remote sensing. In this project, Artificial Intelligence (AI) technology was used for image generation. The Generative Adversarial Network (GAN), one of the most widely used network architectures in AI, was explored for optical-to-SAR image translation. This research applied two GAN networks, CycleGAN and Pix2Pix, to realize the image generation; the feasibility and performance of the two networks were confirmed on a dataset of optical images and corresponding SAR images. By translating optical images into SAR images, this project aimed to solve the problem of lacking paired multi-modal datasets in the remote sensing field, enabling more efficient data augmentation for large-scale AI applications in remote sensing.
In future research, multi-modal fusion translation can be explored using LiDAR images, optical images, and SAR images. With more features drawn from different modalities, image translation can be made more accurate.
URI: https://hdl.handle.net/10356/158390
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
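The abstract above names CycleGAN as one of the two networks used for unpaired optical-to-SAR translation. The defining ingredient of CycleGAN is its cycle-consistency loss: an optical image translated to SAR and back should reconstruct the original. The report itself does not include code, so the sketch below is a hypothetical illustration with toy stand-in generators `G` (optical to SAR) and `F` (SAR to optical), not the author's implementation:

```python
import numpy as np

def cycle_consistency_loss(G, F, optical_batch, sar_batch, lam=10.0):
    """CycleGAN-style cycle loss: F(G(optical)) should recover the
    optical input, and G(F(sar)) the SAR input, measured with L1."""
    forward = np.abs(F(G(optical_batch)) - optical_batch).mean()
    backward = np.abs(G(F(sar_batch)) - sar_batch).mean()
    return lam * (forward + backward)

# Toy "generators" for illustration only; identity maps reconstruct
# their input perfectly, so the cycle loss is exactly zero.
G = lambda x: x  # hypothetical optical -> SAR generator
F = lambda x: x  # hypothetical SAR -> optical generator
opt = np.random.rand(2, 64, 64)
sar = np.random.rand(2, 64, 64)
print(cycle_consistency_loss(G, F, opt, sar))  # 0.0 for identity maps
```

In a full CycleGAN this term is added to the adversarial losses of the two discriminators; it is what allows training without the paired optical/SAR images whose scarcity motivates the project.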
Updated on Dec 1, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.