Full metadata record
DC Field | Value | Language
dc.contributor.author | Chua, Mark De Wen | en_US
dc.identifier.citation | Chua, M. D. W. (2021). Automatic sign language detector for video call. Final Year Project (FYP), Nanyang Technological University, Singapore. | en_US
dc.description.abstract | Video conferencing has become a big part of our lives since COVID-19 hit, but the hearing-impaired cannot communicate efficiently during video conferences. The Singapore Association For The Deaf (SADeaf) states that there has been a rise in interest in learning sign language to communicate with hearing-impaired family members or co-workers. However, sign language has a steep learning curve. This project aims to enable real-time interpretation of sign language using You Only Look Once (YOLO) neural networks. The application is designed to output the word visually and audibly when the user signs to their web camera while video conferencing. | en_US
dc.publisher | Nanyang Technological University | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Pattern recognition | en_US
dc.title | Automatic sign language detector for video call | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Lam Siew Kei | en_US
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.description.degree | Bachelor of Engineering (Computer Engineering) | en_US
item.fulltext | With Fulltext | -
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
FYP Report - Mark Chua De Wen 16 Apr 2021.pdf (Restricted Access) | - | 2.28 MB | Adobe PDF

Page view(s)

Updated on Jul 3, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.