Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/148038
Title: Automatic sign language detector for video call
Authors: Chua, Mark De Wen
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Chua, M. D. W. (2021). Automatic sign language detector for video call. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/148038
Abstract: Video conferencing has become a large part of our lives since COVID-19 hit, yet the hearing-impaired cannot communicate efficiently over it. The Singapore Association for the Deaf (SADeaf) states that interest in learning sign language to communicate with hearing-impaired family members or co-workers has risen. However, sign language has a steep learning curve. This project aims to enable real-time interpretation of sign language using You Only Look Once (YOLO) neural networks. The application is designed to output the detected word visually and audibly when the user signs to their web camera while video conferencing.
URI: https://hdl.handle.net/10356/148038
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
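The pipeline the abstract describes, a YOLO detector whose per-frame sign predictions are surfaced as words, can be sketched as follows. The `Detection` structure, the sign vocabulary, and the confidence threshold are illustrative assumptions for this sketch, not the report's actual classes or implementation:

```python
from dataclasses import dataclass

# Hypothetical per-frame output of a YOLO detector: one entry per detected sign.
@dataclass
class Detection:
    class_id: int      # index into the sign vocabulary
    confidence: float  # detector confidence in [0, 1]

# Illustrative sign vocabulary; a real model's classes come from its training data.
SIGN_CLASSES = ["hello", "thank you", "yes", "no", "help"]

def detections_to_words(detections, threshold=0.5):
    """Keep confident detections and map class indices to display words."""
    return [
        SIGN_CLASSES[d.class_id]
        for d in detections
        if d.confidence >= threshold and 0 <= d.class_id < len(SIGN_CLASSES)
    ]

# One frame's worth of detections: the low-confidence "no" is filtered out.
frame_detections = [Detection(0, 0.92), Detection(3, 0.41), Detection(1, 0.77)]
print(detections_to_words(frame_detections))  # → ['hello', 'thank you']
```

In a full system the resulting words would be overlaid on the video stream and passed to a text-to-speech engine for the audible output the abstract mentions.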
Appears in Collections: | SCSE Student Reports (FYP/IA/PA/PI) |
Files in This Item:
File | Description | Size | Format
---|---|---|---
FYP Report - Mark Chua De Wen 16 Apr 2021.pdf (Restricted Access) | | 2.28 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.