Title: Hand gesture recognition using RF-sensing
Authors: Tan, Sheng Rong
Keywords: Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Tan, S. R. (2021). Hand gesture recognition using RF-sensing. Final Year Project (FYP), Nanyang Technological University, Singapore.
Project: SCSE20-0537
Abstract: In recent years, a device’s Human-Computer Interaction (HCI) has received growing emphasis as improvements in devices’ raw performance have begun to slow. This has led to research on hands-free devices and controllers. Hand gesture recognition is one method of improving the overall HCI experience between a user and his/her device, and the rise of deep-learning techniques for analysing data has opened new possibilities in the field of smart sensing. Optical and acoustic sensors already exist in the popular smartphones that many own today; features that utilise these sensors include face recognition for biometric security and voice assistants. The radar, an alternative sensor, has been gaining significant research interest due to its non-invasive nature compared with these counterparts, yet radars have not been widely used in commercial products. In this project, a machine-learning framework that uses readings from an ultra-wideband (UWB) radar sensor to recognise hand sign gestures is presented. Signal data are collected with the radar, each signature being a one-dimensional tensor. These signatures are pre-processed and then fed into a Convolutional Neural Network (CNN) to extract unique features before being passed to a classifier. Two CNN architectures are compared in terms of hand gesture prediction accuracy: (i) a simple deep Convolutional Network (CN) and (ii) an extremely deep CN, the Residual Network (ResNet). The shape of the radar tensor and the parameters of the classifiers are optimised to maximise classification accuracy. The proposed architecture (i), the simple deep CN, achieved a moderate accuracy of around 60%. The proposed architecture (ii), the ResNet, achieved a high accuracy above 90% and a very low confusion probability even between similar gestures.
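The pipeline the abstract describes (a one-dimensional radar signature passed through convolutional filters, then pooled into a feature vector for a classifier) can be illustrated with a minimal sketch. All values and kernels below are hypothetical, and plain Python stands in for a deep-learning framework; this is not the project's actual code.

```python
# Hypothetical sketch of 1-D convolutional feature extraction on a radar
# signature, as outlined in the abstract. Kernels and data are made up.

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation) over a signature."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Element-wise rectified linear activation."""
    return [max(0.0, x) for x in xs]

def global_max_pool(xs):
    """Collapse a feature map to its strongest activation."""
    return max(xs)

# Toy 1-D radar signature (one "tensor" per gesture capture).
signature = [0.1, 0.4, 0.9, 0.3, -0.2, 0.5, 0.7, 0.0]

# Two illustrative filters: an edge detector and a smoother.
kernels = [[1.0, -1.0], [0.5, 0.5]]

# One pooled activation per filter; this vector would feed the classifier.
features = [global_max_pool(relu(conv1d(signature, k))) for k in kernels]
print(features)
```

In the actual project, such filters are learned during training rather than hand-chosen, and the pooled feature vector is passed to a classifier that predicts the gesture class.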
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
  Restricted Access — 1.49 MB, Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.