Full metadata record

dc.contributor.author: Koh, Hui Ling
dc.description.abstract: This project explores the use of a 3D head model for lip sync animation, given an audio file and its phoneme segmentation data obtained from word-level transcription. A comparison is done with the traditional method of rendering a sequence of 2D images for animation, showing how using a 3D model can be advantageous. Research was done on how animators map all the phonemes into a set of visemes, and how software and modelling techniques can be used to blend these visemes to produce a talking head with realistic-looking mouth movements. Some commercial uses for a 3D talking head and suggestions for further enhancements are also discussed.
dc.format.extent: 46 p.
dc.rights: Nanyang Technological University
dc.subject: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Computer graphics
dc.title: Talking head
dc.type: Final Year Project (FYP)
dc.contributor.supervisor: Chng Eng Siong
dc.contributor.school: School of Computer Engineering
dc.description.degree: Bachelor of Engineering (Computer Engineering)
item.fulltext: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
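The abstract describes mapping phonemes to a smaller set of visemes and blending them over time. A minimal sketch of that idea is below; the phoneme-to-viseme table and segment format are hypothetical illustrations, not the mapping used in the report itself.

```python
# Sketch: convert phoneme segments into viseme keyframes for lip sync.
# PHONEME_TO_VISEME is a hypothetical reduced mapping (real animators
# tune a larger table); segments are (phoneme, start_sec, end_sec).

PHONEME_TO_VISEME = {
    "AA": "open",   "AE": "open",   "AH": "open",
    "B":  "closed", "M":  "closed", "P":  "closed",
    "F":  "teeth",  "V":  "teeth",
    "OW": "round",  "UW": "round",
    "sil": "rest",
}

def viseme_track(segments):
    """Map (phoneme, start, end) segments to (viseme, start, end)
    keyframes, merging consecutive segments that share a viseme so
    the mouth shape holds instead of re-triggering."""
    track = []
    for phoneme, start, end in segments:
        viseme = PHONEME_TO_VISEME.get(phoneme, "rest")
        if track and track[-1][0] == viseme:
            # Extend the previous keyframe rather than duplicating it.
            track[-1] = (viseme, track[-1][1], end)
        else:
            track.append((viseme, start, end))
    return track

segments = [("sil", 0.0, 0.1), ("B", 0.1, 0.2), ("AA", 0.2, 0.4),
            ("AH", 0.4, 0.5), ("M", 0.5, 0.6)]
print(viseme_track(segments))
# [('rest', 0.0, 0.1), ('closed', 0.1, 0.2), ('open', 0.2, 0.5), ('closed', 0.5, 0.6)]
```

In a 3D pipeline, each keyframe would drive a blend-shape weight on the head mesh, interpolated between neighbouring keyframes to smooth the mouth motion.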
Files in This Item:
File: (Restricted Access), 1.99 MB, Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.