Title: Machine learning for remote proctoring
Authors: Zhang, Zhenghang
Keywords: Engineering::Electrical and electronic engineering; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Zhang, Z. (2022). Machine learning for remote proctoring. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/161090
Abstract: Due to the COVID-19 pandemic, an increasing number of schools and organizations have been forced to conduct examinations online in recent years. While online examinations are convenient for users and effective in preventing the spread of the virus, they lack a robust proctoring system compared to traditional offline exams. Many existing online tests are monitored entirely by humans: the proctors must watch all of the users' actions on screen, which is inefficient and time-consuming. Our goal in this work is to provide a machine-learning-based remote proctoring system. It can monitor users during the examination process, and if any unusual behavior is identified, it is promptly recorded in a log file so that the invigilator can review the video when necessary. Furthermore, the system does not require any additional hardware; all that is needed is the camera built into the computer. The system is made up of four modules: user identification, suspicious object detection, head pose estimation, and gaze estimation. The face recognition function of the user identification module is implemented with a lightweight VGG (Oxford Visual Geometry Group) network. The suspicious object detection module employs a network based on the YOLO (You Only Look Once) algorithm and pre-trained on the COCO (Microsoft Common Objects in Context) dataset to detect the number of individuals present and the presence of forbidden items such as cell phones. In the head pose estimation module, instead of the typical Euler angles or quaternions, we employ a 6D representation of rotation, together with a RepVGG network, whose structure can be modified by reparameterization.
On both the AFLW and BIWI datasets, our model trained on 300W-LP outperforms two previous techniques. In the gaze estimation module, we employ the L2CS-Net architecture and train the model with a loss function that combines classification and regression terms; the network performs better on the Gaze360 dataset than another state-of-the-art model. We also performed functional tests for each of the aforementioned modules to verify their effectiveness in the proctoring scenario.
URI: https://hdl.handle.net/10356/161090
Schools: School of Electrical and Electronic Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
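The abstract's 6D representation of rotation, in its common formulation, encodes a rotation as two 3D vectors that are orthonormalized by Gram-Schmidt into the first two columns of a rotation matrix. A minimal NumPy sketch of that mapping, assuming the thesis follows this standard construction (the thesis's exact formulation is not given here):

```python
import numpy as np

def rotation_matrix_from_6d(x: np.ndarray) -> np.ndarray:
    """Map a 6D vector (two stacked 3D vectors) to a 3x3 rotation
    matrix via Gram-Schmidt orthonormalization."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)           # normalize the first vector
    a2 = a2 - np.dot(b1, a2) * b1          # remove the component along b1
    b2 = a2 / np.linalg.norm(a2)           # normalize the second vector
    b3 = np.cross(b1, b2)                  # third column from the cross product
    return np.stack([b1, b2, b3], axis=1)  # columns form a matrix in SO(3)
```

Unlike Euler angles, this map is continuous over the rotation group, which avoids discontinuities and gimbal lock during training and is the usual motivation for choosing it in head pose networks.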
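A loss combining classification and regression for gaze estimation typically bins each gaze angle, applies cross-entropy over the bins, and regresses the expected angle recovered from the softmax. A sketch for a single angle follows; the bin count, bin width, and weighting `alpha` are illustrative assumptions, not the thesis's actual hyperparameters:

```python
import numpy as np

def combined_gaze_loss(logits, target_deg, n_bins=90, bin_width=4.0, alpha=1.0):
    """Cross-entropy over angle bins plus a regression penalty on the
    soft-argmax angle (single-angle sketch; hyperparameters assumed)."""
    # Bin centers spanning [-180, 180) degrees (assumed layout).
    centers = (np.arange(n_bins) - n_bins / 2) * bin_width + bin_width / 2
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Index of the bin containing the ground-truth angle.
    lo_edge = centers[0] - bin_width / 2
    target_bin = int(np.clip((target_deg - lo_edge) // bin_width, 0, n_bins - 1))
    ce = -np.log(probs[target_bin] + 1e-12)   # classification term
    expected = float(np.dot(probs, centers))  # soft-argmax expected angle
    mse = (expected - target_deg) ** 2        # regression term
    return ce + alpha * mse
```

The classification term stabilizes training by constraining predictions to a coarse angular range, while the regression term recovers a continuous angle from the bin probabilities.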
Appears in Collections: EEE Theses
Updated on Dec 2, 2023
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.