Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/178252
Title: Markerless motion capture of hand and object tracking for rehabilitation
Authors: Lim, Guan Ming
Keywords: Engineering
Issue Date: 2024
Publisher: Nanyang Technological University
Source: Lim, G. M. (2024). Markerless motion capture of hand and object tracking for rehabilitation. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/178252
Project: RRG2/16001, RFP/19003, RRG4/2201
Abstract: Capturing hand motion has myriad applications ranging from entertainment to healthcare. Current approaches in hand rehabilitation often involve the placement of a goniometer, wearable sensors, or markers, which can be inefficient, hinder natural hand movements, or require time-consuming setup and data post-processing. A promising alternative is markerless motion capture, which seeks to estimate hand poses directly from images. However, existing methods face challenges related to real-time performance, accuracy, and robustness to hand-object interaction. Therefore, this thesis aims to enhance the efficiency and accuracy of markerless motion capture of hand and object tracking for rehabilitation. First, we present a minimal setup that employs an efficient neural network for real-time estimation of 3D hand pose and shape from a single color image. To address accuracy limitations stemming from depth ambiguity in a single-camera setup, we propose a simple method that uses a mirror-based multi-view setup to measure hand motion. This eliminates the complexity of synchronizing multiple cameras and reduces joint angle errors by half compared to a single-view setup. Additionally, to account for hand-object interaction, we create synthetic depth images of subjects with diverse body shapes to train a neural network to segment forearms, hands, and objects. In practice, the initial pose estimate or object segmentation from the neural network (learning-based) is never perfect, but it can be refined with model fitting (optimization-based). Therefore, we perform rigid object tracking using precomputed sparse viewpoint information, enabling real-time operation while achieving submillimeter accuracy on synthetic datasets. The method is extended to track an articulated hand model during object interaction in a multi-camera setup, achieving an average joint angle error of around 10 degrees when validated against a marker-based motion capture system.
Finally, to analyze grasping parameters when the hand is in contact with the object, we develop a sensorized object covered with a pressure sensor array that generates a 2D pressure distribution map. This provides additional information on the grasping pattern that is not available from color or depth images. Overall, we demonstrate the potential of a markerless motion capture system to complement hand rehabilitation by providing real-time feedback on dynamic hand motion that is robust to hand-object interaction. Furthermore, the markerless setup is far more portable than marker-based systems, making hand motion capture feasible in clinics or at home.
URI: https://hdl.handle.net/10356/178252
DOI: 10.32657/10356/178252
Schools: School of Mechanical and Aerospace Engineering 
Research Centres: Robotics Research Centre 
Rehabilitation Research Institute of Singapore (RRIS) 
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: embargo_20260701
Fulltext Availability: With Fulltext
Appears in Collections:MAE Theses

Files in This Item:
MARKERLESS MOTION CAPTURE OF HAND AND OBJECT TRACKING FOR REHABILITATION.pdf (12.83 MB, Adobe PDF) — under embargo until Jul 01, 2026

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.