Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/153249
Title: Deep image enhancement
Authors: Han, Jun
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Han, J. (2021). Deep image enhancement. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/153249
Project: SCSE20-0824
Abstract: Deep-learning-based methods have brought huge improvements to the field of image restoration and enhancement. Recent methods explore generative priors from pre-trained generators such as StyleGAN for the task of restoration. In this work, I follow this direction and delve deeper to gain more insights. I first conduct experiments and analysis on a relatively mature task – image denoising. My experiments demonstrate that the generative priors encapsulated in a generative network (StyleGAN) are able to improve performance not only in super-resolution but also in denoising. Furthermore, I analyze the sensitivity of such networks to changes in the input image. I find that even a subtle change in the input can lead to substantial changes in the output. Motivated by these findings, I shift the focus to the task of real-world face image restoration and devise a simple yet effective image manipulation method that substantially improves the outputs of a pre-trained model.
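The sensitivity analysis mentioned in the abstract can be made concrete with a small probe: perturb the input with low-amplitude Gaussian noise and measure how far the network's output moves relative to the perturbation. The sketch below is a minimal, hypothetical illustration in PyTorch, not code from the report; the `sensitivity_probe` helper and the stand-in convolutional network are assumptions, standing in for the StyleGAN-prior restoration model studied in the project.

```python
import torch

def sensitivity_probe(model, x, eps=1e-3, n_trials=10, seed=0):
    """Estimate how much `model`'s output moves when `x` is perturbed
    by Gaussian noise with standard deviation `eps`.

    Returns the mean ratio ||f(x+d) - f(x)|| / ||d|| over `n_trials`
    random perturbations d; ratios far above 1 mean small input
    changes are amplified into large output changes.
    """
    torch.manual_seed(seed)
    model.eval()
    with torch.no_grad():
        y = model(x)                       # reference output f(x)
        ratios = []
        for _ in range(n_trials):
            d = eps * torch.randn_like(x)  # subtle input change
            ratios.append((model(x + d) - y).norm() / d.norm())
    return torch.stack(ratios).mean().item()

# Usage with a stand-in network (hypothetical; the report's experiments
# would run this kind of probe on a StyleGAN-prior restoration model):
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
)
x = torch.rand(1, 3, 64, 64)  # dummy RGB image batch
print(f"mean output/input change ratio: {sensitivity_probe(net, x):.3f}")
```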
URI: https://hdl.handle.net/10356/153249
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP_REPORT_SCSE20-0824_Deep Image Enhancement_Han Jun.pdf
Description: Restricted Access
Size: 3.04 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.