Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/141311
Title: Facial expression conversion with generative adversarial network
Authors: Xi, Yihong
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2020
Publisher: Nanyang Technological University
Project: ISM-DISS-01883
Abstract: StarGAN achieves image conversion across multiple image domains, but the combined and coordinated actions of facial muscles it can produce remain discrete and limited by the dataset. Its facial expression conversion therefore cannot satisfy the demand for realism in areas such as the movie and fashion industries. GANs such as CycleGAN and WGAN-GP serve as the basic models underlying GANimation. A proven generator architecture (ResNet) and discriminator architecture (PatchGAN) form the backbone networks of this project. Moreover, representing different facial expressions by action units (AUs) and their intensities, rather than by a traditional feature extraction method, improves conversion accuracy and speed. In this project, we train and test a model on the CelebA dataset to realize facial expression conversion, describing human facial expressions anatomically in a continuous domain; only the intensities of the activated AUs are used.
URI: https://hdl.handle.net/10356/141311
Schools: School of Electrical and Electronic Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
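The full text is restricted, but the abstract outlines a GANimation-style design: a ResNet generator conditioned on a vector of action-unit (AU) intensities, paired with a PatchGAN discriminator trained under a WGAN-GP objective. The following is a minimal PyTorch sketch of the AU-conditioning idea only; the class names, channel widths, number of residual blocks, and the choice of 17 AUs are illustrative assumptions, not the architecture actually used in the thesis.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Plain residual block for a ResNet-style generator."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.InstanceNorm2d(channels, affine=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.InstanceNorm2d(channels, affine=True),
        )

    def forward(self, x):
        return x + self.block(x)


class AUConditionedGenerator(nn.Module):
    """Generator conditioned on a continuous AU-intensity vector.

    The target AU vector is broadcast to the spatial size of the input
    image and concatenated with it channel-wise, so one network can
    synthesize a continuum of expressions instead of discrete labels.
    """
    def __init__(self, num_aus=17, base_channels=64, num_blocks=6):
        super().__init__()
        layers = [
            nn.Conv2d(3 + num_aus, base_channels, 7, padding=3, bias=False),
            nn.InstanceNorm2d(base_channels, affine=True),
            nn.ReLU(inplace=True),
        ]
        # Two down-sampling stages.
        ch = base_channels
        for _ in range(2):
            layers += [
                nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1, bias=False),
                nn.InstanceNorm2d(ch * 2, affine=True),
                nn.ReLU(inplace=True),
            ]
            ch *= 2
        # Residual bottleneck.
        layers += [ResidualBlock(ch) for _ in range(num_blocks)]
        # Two up-sampling stages back to the input resolution.
        for _ in range(2):
            layers += [
                nn.ConvTranspose2d(ch, ch // 2, 4, stride=2, padding=1, bias=False),
                nn.InstanceNorm2d(ch // 2, affine=True),
                nn.ReLU(inplace=True),
            ]
            ch //= 2
        layers += [nn.Conv2d(ch, 3, 7, padding=3), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, img, au_target):
        # img: (B, 3, H, W); au_target: (B, num_aus), intensities in [0, 1].
        b, _, h, w = img.shape
        au_map = au_target.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([img, au_map], dim=1))


# Hypothetical usage: translate a face toward an arbitrary target AU activation.
g = AUConditionedGenerator()
fake = g(torch.randn(1, 3, 128, 128), torch.rand(1, 17))
```

In the full GANimation formulation the generator also predicts an attention mask that blends the synthesized pixels with the input image, and cycle-consistency plus AU-regression losses constrain training; this sketch omits those parts for brevity.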
Appears in Collections: EEE Theses
Files in This Item:
File | Description | Size | Format
---|---|---|---
Facial Expression Conversion with Generative Adversarial Network.pdf (Restricted Access) | | 1.51 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.