Title: Image-to-image translation based on generative models
Authors: Tang, Mengxiao
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Tang, M. (2021). Image-to-image translation based on generative models. Master's thesis, Nanyang Technological University, Singapore.
Abstract: Image-to-image translation has become a widely studied topic in computer vision. It aims to find a model that takes an input image and generates the corresponding desired output image. Previous studies based on deep neural networks were mostly built upon encoder-decoder architectures, where a direct mapping from input to target output is learned without exploring the distribution of images. In this thesis, generative models are used to capture the distribution of images, and their potential for image-to-image translation tasks is explored. Specifically, an improved CycleGAN is proposed for the style transfer task, and a DDPM-based conditional generative model is used for image colorization. Empirical results show that generative models can achieve competitive results in image-to-image translation tasks.
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: Image-to-Image Translation Based on Generative Models.pdf (Restricted Access)
Size: 14.54 MB
Format: Adobe PDF

Page view(s): updated on Jan 27, 2022

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.