Title: Visual analysis by using artificial intelligence (AI): face generation and recognition under pose variation
Authors: You, Yuquan
Keywords: Engineering::Electrical and electronic engineering::Computer hardware, software and systems
Issue Date: 2020
Publisher: Nanyang Technological University
Project: P3034-182
Abstract: The main idea of the project is to take a side-view face image captured by a monitoring camera, generate the corresponding frontal-view image with a face generation module, and then automatically recognize the subject's identity with a face recognition module. The proposed framework, named TPFNet, contains two modules: face generation and face recognition. The face generation module is based on the Two-Pathway Generative Adversarial Network (TPGAN), while the face recognition module combines Multi-task Cascaded Convolutional Networks (MTCNN) with FaceNet. The dataset was collected from the internet; to achieve good results, the project used the Multi-PIE dataset, which contains more than 17,000 images under different poses and illumination conditions. To explain the face generation process, the report describes the basic concepts of Generative Adversarial Networks (GANs) and the distinctive points of the TPGAN architecture. Because no pre-trained TPGAN model has been released on the internet, I share my experimental details and highlight the challenges encountered during training. The face recognition module employs both MTCNN and FaceNet: MTCNN is used for face detection, and FaceNet is a unified framework for identification and verification. The report explains the whole face recognition process in the project and describes the main ideas of MTCNN and FaceNet, along with experimental details. Lastly, the project has great potential, as it demonstrates strong performance in face generation, face detection, and face recognition. Recommendations on future directions for improvement are stated in the last chapter.
URI: https://hdl.handle.net/10356/140320
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
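The verification step that the abstract attributes to FaceNet can be sketched in a few lines: FaceNet maps each face to an embedding vector, and two faces are judged to belong to the same person when the squared L2 distance between their embeddings falls below a threshold. The sketch below is a minimal illustration of that decision rule only; the 3-dimensional toy vectors (real FaceNet embeddings are 128-dimensional) and the 1.1 threshold are illustrative assumptions, not values taken from the report.

```python
def l2_distance(a, b):
    """Squared L2 distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def same_identity(emb_a, emb_b, threshold=1.1):
    """FaceNet-style verification: same person iff embeddings are close."""
    return l2_distance(emb_a, emb_b) < threshold

# Toy embeddings standing in for FaceNet outputs.
anchor   = [0.10, 0.20, 0.30]   # reference face
positive = [0.12, 0.19, 0.31]   # same person, slightly different image
negative = [0.90, -0.50, 0.70]  # different person
```

Here `same_identity(anchor, positive)` is true while `same_identity(anchor, negative)` is false, mirroring how the recognition module would verify an identity after MTCNN has detected and cropped the face.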
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Updated on Jan 28, 2023
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.