Title: Automatic sign language detector for video call
Authors: Chua, Mark De Wen
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Chua, M. D. W. (2021). Automatic sign language detector for video call. Final Year Project (FYP), Nanyang Technological University, Singapore.
Abstract: Video conferencing has been a big part of our lives since COVID-19 hit, but the hearing-impaired lack an efficient way to communicate during video calls. The Singapore Association for the Deaf (SADeaf) states that there has been a rise in interest in learning sign language to communicate with hearing-impaired family members or co-workers. However, sign language has a steep learning curve. This project aims to enable real-time interpretation of sign language using the You Only Look Once (YOLO) neural network. The application is designed to output the word visually and audibly when the user signs into their web camera while video conferencing.
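The pipeline the abstract describes (detect a sign with YOLO, then emit the recognized word visually and audibly) can be sketched as below. This is a minimal illustration, not the report's implementation: `SIGN_CLASSES`, the confidence threshold, and the detection tuples are hypothetical placeholders, and a real system would replace the hard-coded detections with a trained YOLO model running on webcam frames and route the output to a text-to-speech engine.

```python
# Sketch of the detect-then-announce loop described in the abstract.
# SIGN_CLASSES and the detection format are assumed placeholders; a real
# system would load trained YOLO weights and read frames from the webcam.

SIGN_CLASSES = ["hello", "thank_you", "yes", "no"]  # assumed vocabulary

def interpret(detections, conf_threshold=0.5):
    """Map raw YOLO-style detections (class_id, confidence) to words.

    Only detections at or above the confidence threshold are kept,
    mirroring how YOLO filters out low-confidence boxes before output.
    """
    words = []
    for class_id, confidence in detections:
        if confidence >= conf_threshold:
            words.append(SIGN_CLASSES[class_id])
    return words

def announce(words):
    """Build the visual caption; a TTS call (e.g. pyttsx3) would go here."""
    return " ".join(words)

# Example: two confident detections and one that is filtered out.
detections = [(0, 0.91), (2, 0.30), (1, 0.76)]
print(announce(interpret(detections)))  # hello thank_you
```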
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP Report - Mark Chua De Wen 16 Apr 2021.pdf
Description: Restricted Access
Size: 2.28 MB
Format: Adobe PDF

Updated on Jun 27, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.