Title: Deep image inpainting
Authors: Chua, Hao Yang
Keywords: Engineering::Computer science and engineering
Issue Date: 2019
Abstract: Over the years, many techniques have emerged to reconstruct and modify images for a myriad of applications. One ingenious application is image inpainting, which restores the missing parts of an image. The latest approaches employ deep learning to solve the problem: deep convolutional neural networks capture abstract details from many training images, so that the network can infer the content of a missing region. The network's performance relies heavily on the information provided during training, yet most prior work fails to exploit, or even recognize, the value of prior information that could boost the network's proficiency. This project uses segmentation maps as engineered features that supply supplementary information to aid the inpainting process. The proposed method consists of two stages: first, generate the segmentation map of the missing region; second, fuse the generated segmentation map into the inpainting process as a prior. Training and evaluation are done on the ADE20K dataset, with eight segmentation categories defined and all other objects treated as the background category.
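The two-stage pipeline described in the abstract can be illustrated with a minimal toy sketch. This is not the project's implementation: the thesis uses deep convolutional networks for both stages, while the function names and fill heuristics below are hypothetical stand-ins chosen only to make the stage-1 (segmentation completion) then stage-2 (segmentation-guided inpainting) flow concrete.

```python
import numpy as np

def predict_segmentation(seg, mask):
    # Stage 1 (toy stand-in for the segmentation network): assign each
    # hole pixel the label of its nearest known pixel along the same row.
    seg = seg.copy()
    for i in range(seg.shape[0]):
        known = np.where(~mask[i])[0]          # columns with known labels
        if known.size == 0:
            continue
        for j in np.where(mask[i])[0]:         # columns inside the hole
            seg[i, j] = seg[i, known[np.argmin(np.abs(known - j))]]
    return seg

def inpaint(img, seg, mask):
    # Stage 2 (toy stand-in for the inpainting network): paint each hole
    # pixel with the mean colour of its predicted category, estimated
    # from the known pixels belonging to that category.
    out = img.astype(float).copy()
    for c in np.unique(seg):
        known = (~mask) & (seg == c)
        hole = mask & (seg == c)
        if known.any() and hole.any():
            out[hole] = out[known].mean(axis=0)
    return out
```

A usage example: on an 8x8 image whose left half is red (category 0) and right half blue (category 1), a hole straddling the boundary is first relabelled by stage 1 and then filled red on the left and blue on the right by stage 2, showing how the segmentation prior keeps the filled colours consistent with their regions.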
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Deep Image Inpainting.pdf (Restricted Access)
Size: 4.22 MB
Format: Adobe PDF

Updated on Nov 23, 2020


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.