Title: Automatic visual speech recognition
Authors: Irwan Widjojo
Lee, Kean Hin
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2016
Abstract: One of the most challenging tasks in automatic visual speech recognition is the extraction of feature parameters from image sequences of lips. There are two main approaches to extracting visual speech information from image sequences: the model-based approach and the pixel-based approach. The advantage of the model-based approach is that the parameters of the lip contour model are less influenced by variability in lighting conditions and in lip location and rotation, but constructing an efficient yet robust lip contour model capable of tracking the lip is difficult. The pixel-based approach, on the other hand, must take variability in lighting conditions, lip rotation and lip location into account. Despite much research, lip tracking remains a challenging task because of the wide variation among face images. The pixel-based approach was adopted in this project. Raw data for visual speech recognition were obtained using a digital camcorder. These video recordings were converted to image sequences, and the lip of the speaker was extracted from each frame. The lip boundaries were obtained after the lip was located in each frame, and the lip contour was drawn from the boundaries using least-squares polynomial fitting. Ten important visual speech features were extracted from each frame and then vector-quantized. The resulting vector sequences were used to train hidden Markov models (HMMs), and the trained models were used to recognize unknown vector sequences.
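The abstract describes drawing the lip contour from detected boundary points with a least-squares polynomial. A minimal sketch of that step is shown below; the boundary points, polynomial degree, and function name are illustrative assumptions, not details from the report.

```python
# Hypothetical sketch of least-squares polynomial contour fitting.
# The point data and degree are invented for illustration; the report
# does not specify them.
import numpy as np

def fit_lip_contour(xs, ys, degree=2):
    """Fit a least-squares polynomial to detected lip-boundary points
    and return the fitted contour's y-values at the given x positions."""
    coeffs = np.polyfit(xs, ys, degree)   # least-squares coefficients
    return np.polyval(coeffs, xs)         # evaluate the contour at xs

# Example: invented boundary points along an upper-lip edge
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([2.0, 1.2, 1.0, 1.3, 2.1])  # roughly parabolic shape
contour = fit_lip_contour(xs, ys, degree=2)
```

A low-degree fit like this smooths out noisy boundary detections, giving a stable contour from which frame-by-frame shape features can be measured.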
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Restricted Access (16.56 MB, Adobe PDF)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.