Title: Cloud removal in optical remote sensing imagery based on SAR-guided inpainting using deep learning
Authors: Dou, Kaijie
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Dou, K. (2022). Cloud removal in optical remote sensing imagery based on SAR-guided inpainting using deep learning. Master's thesis, Nanyang Technological University, Singapore.
Abstract: With the development of high-resolution optical remote sensing satellites, optical satellite imagery is increasingly used in navigation and positioning, agricultural surveying, environmental protection, disaster prevention and mitigation, marine development, urbanization research, and other fields. However, not all images meet the requirements of intelligent information processing; one of the most significant obstacles is cloud contamination. Clouds not only occlude the ground scene but also alter the spectral and texture information of the image to some extent, hindering many stages of remote sensing image product generation. Existing methods based on multi-temporal data or single-image restoration are affected by changes in ground features and a lack of prior information, making it difficult to recover the true ground information beneath the clouds. Synthetic Aperture Radar (SAR), by contrast, is unaffected by weather and can penetrate clouds; however, because of its distinctive microwave imaging mechanism and lack of color information, the required information is difficult to identify visually in SAR images. Therefore, exploiting the all-weather capability of SAR and the high resolution of optical imagery, this dissertation proposes a deep-learning-based optical image cloud removal method guided by SAR data. The method consists of two deep learning modules. In the first module, a Generative Adversarial Network (GAN) takes SAR data and the optical image as input and generates a complete, cloud-free edge map. In the second module, another GAN takes the edge map generated by the first module, the SAR image, and the optical image as input and produces a cloudless image.
Finally, the cloud-contaminated regions of the cloudy optical image are extracted from the generated image and spliced back into the original, yielding a high-resolution image with promising performance.
Keywords: Cloud removal, SAR, Generative adversarial network (GAN), Edge detection
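The final splicing step described in the abstract (replacing only the cloud-contaminated pixels of the original image with the corresponding pixels of the GAN-generated image) can be sketched as a simple mask-guided composite. This is an illustrative reconstruction, not the author's code; the function name, the binary cloud mask convention, and the toy pixel values are assumptions.

```python
def splice_cloud_free(original, generated, cloud_mask):
    """Composite a cloud-free result from two images of equal shape.

    `original`   : 2-D list of pixel values from the cloudy optical image.
    `generated`  : 2-D list of pixel values from the GAN reconstruction.
    `cloud_mask` : 2-D list with 1 where a pixel is cloud-contaminated
                   (take the generated pixel) and 0 elsewhere (keep the
                   original pixel).
    """
    return [
        [gen if m else orig
         for orig, gen, m in zip(orig_row, gen_row, mask_row)]
        for orig_row, gen_row, mask_row in zip(original, generated, cloud_mask)
    ]

# Toy 2x2 example: only the top-left pixel is cloud-covered.
original = [[255, 80], [70, 60]]    # 255 = saturated cloud pixel (hypothetical)
generated = [[90, 85], [75, 65]]    # GAN reconstruction (hypothetical values)
mask = [[1, 0], [0, 0]]
result = splice_cloud_free(original, generated, mask)
# result == [[90, 80], [70, 60]]
```

Because only masked pixels are replaced, the unoccluded parts of the original high-resolution image pass through untouched, which is why the abstract describes the output as a high-resolution image rather than a fully regenerated one.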
Schools: School of Electrical and Electronic Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Theses

Files in This Item:
File: Restricted Access, 2.39 MB, Adobe PDF

Updated on May 24, 2024

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.