Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/139259
Title: Generating human faces by generative adversarial networks
Authors: Quek, Chin Wei
Keywords: Engineering::Computer science and engineering
Issue Date: 2020
Publisher: Nanyang Technological University
Project: SCSE19-0113
Abstract: Over the years, computer vision has improved significantly. Beyond recognising and understanding what lies within an image, we can now generate images by modelling the training distribution using a generative adversarial network (GAN). Since then, researchers have proposed various GAN variants and ways to stabilize GAN training, which has improved the quality of generated images. The applications of GANs have sparked the interest of many people. In this project, we first analyse the use of StarGAN, a unified generative adversarial network for multi-domain image-to-image translation, to generate human facial expressions. We also explore the possible use of StarGAN in cartoon character facial expression generation and video generation.
URI: https://hdl.handle.net/10356/139259
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
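The abstract above describes generating images by training a generator and discriminator in opposition. As a point of reference only (this sketch is not taken from the report, and all sizes and layer choices are illustrative assumptions), a minimal adversarial training step in PyTorch might look like this:

```python
# Hypothetical minimal sketch of the adversarial objective mentioned in the
# abstract: a generator G maps noise to images, a discriminator D scores
# real vs. generated samples, and the two are trained in opposition.
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64 * 3  # illustrative sizes, not from the report

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    z = torch.randn(batch, latent_dim)
    fake_images = G(z)

    # Discriminator: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real_images), torch.ones(batch, 1)) +
              bce(D(fake_images.detach()), torch.zeros(batch, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator: push D(G(z)) toward 1, i.e. try to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake_images), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

The report itself uses StarGAN, which extends this basic setup with a domain-classification loss so a single generator can translate faces across multiple expression domains.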
Files in This Item:
File | Description | Size | Format
---|---|---|---
FYP report.pdf (Restricted Access) | | 22.83 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.