Title: Generating human faces by generative adversarial networks
Authors: S Sri Kalki
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: S Sri Kalki (2022). Generating human faces by generative adversarial networks. Final Year Project (FYP), Nanyang Technological University, Singapore.
Project: SCSE21-0843
Abstract: Video style transfer is the process of merging the content of one video with the style of another to create a stylized video. In this report, I first study several style transfer techniques, namely Adaptive Instance Normalisation (AdaIN), AnimeGAN and GAN N' Roses. After studying these approaches, I examine the First Order Motion Model and how it transfers motion sequences from a driving video. Finally, I study the state-of-the-art StyleGAN and the Toonification algorithm in detail. Furthermore, this report reimplements state-of-the-art methodologies, investigates the impact of relevant hyperparameters, and offers an analysis of those hyperparameters. I extend existing StyleGAN-based image Toonification models to video Toonification, and I collect datasets in a total of five styles for the style transfer process. I conclude by discussing potential directions for further development.
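Of the techniques the abstract names, AdaIN is the simplest to state: it re-normalises each channel of the content features to take on the per-channel mean and standard deviation of the style features. A minimal NumPy sketch (my own illustration, not code from the report; array shapes and the `eps` stabiliser are assumptions):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    # Adaptive Instance Normalisation: normalise each channel of the
    # content features, then rescale/shift with the style features'
    # per-channel statistics. Inputs have shape (channels, height, width).
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

# Toy check: after AdaIN, the output carries the style statistics.
rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(3, 8, 8))
style = rng.normal(5.0, 2.0, size=(3, 8, 8))
out = adain(content, style)
```

In the full pipeline these feature maps would come from a pretrained encoder (e.g. VGG activations), and a decoder maps the re-normalised features back to an image; the sketch above shows only the normalisation step itself.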
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Restricted Access, 69.82 MB, Adobe PDF



Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.