Decoding human motion intention using myoelectric signals for assistive technologies
Antuvan, Chris Wilson
Date of Issue: 2019-04-17
School of Mechanical and Aerospace Engineering
Robotics Research Centre
Diseases or trauma affecting either the sensory or motor functions in humans lead to movement impairment, which severely limits the user's independence and ability to perform activities of daily living. Assistive technologies aim to function in parallel with the affected human body and provide assistance. However, it is important for such robots to understand the user's motion intention so that assistance can be provided for the desired movement; intention detection is especially challenging for upper-limb motions, which are primarily involved in performing dexterous manipulation tasks. Myoelectric signals provide relevant information about the intent of motion and the extent of effort applied by a person. As such, electromyographic (EMG) signals are a practically viable means of building intuitive human-machine interfaces for applications in prosthetics, orthotics, tele-manipulation and functional electrical stimulation. However, reliably translating human intention into functional use and efficient control of a multifunctional device remains challenging, primarily because EMG signals are time-varying and noisy; moreover, the relationship between the numerous muscles and the corresponding output forces is complex and non-linear.

The aim of this thesis is to provide improvements and solutions to some of the limitations in decoding myoelectric signals; the work is centered around four themes, with a focus on upper-limb motions. The first goal is to identify strategies for improving the reliability of myoelectric-based motion decoding. We explored the use of extreme learning machines, quantified their performance for online decoding, and evaluated the differences in accuracy between muscle-synergy and time-domain features.
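To make the first theme concrete, the following is a minimal sketch (not the thesis's actual pipeline) of the two ingredients named above: Hudgins-style time-domain features extracted from windowed EMG, and an extreme learning machine, whose fixed random hidden layer and closed-form output weights are what make training fast enough for online decoding. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def td_features(window):
    """Hudgins-style time-domain features for one EMG window (n_samples, n_channels):
    mean absolute value, waveform length, zero crossings, slope-sign changes."""
    d = np.diff(window, axis=0)
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(d), axis=0)
    zc = np.sum(np.diff(np.signbit(window).astype(int), axis=0) != 0, axis=0)
    ssc = np.sum(np.diff(np.signbit(d).astype(int), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc, ssc])

class ELM:
    """Single-hidden-layer extreme learning machine: input weights are random and
    fixed; only the output weights are solved in closed form (least squares)."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        Xs = (X - self.mu) / self.sd          # standardize features
        return np.tanh(Xs @ self.W + self.b)

    def fit(self, X, y):
        self.mu, self.sd = X.mean(0), X.std(0) + 1e-9
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        T = np.eye(int(y.max()) + 1)[y]       # one-hot class targets
        self.beta = np.linalg.pinv(H) @ T     # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```

Because fitting reduces to one pseudo-inverse, retraining on a new batch of labeled windows takes milliseconds, which is the property that makes ELMs attractive for online myoelectric decoding.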
Secondly, we focus on building efficient algorithms that use dimensionality-reduction techniques to simplify control complexity; we explored non-negative matrix factorization and linear discriminant analysis for improving and enhancing the decoding capability of EMG-based control interfaces. The third objective is to incorporate simultaneous decoding capability, thereby enabling dexterous control of the device by the user. We developed and evaluated the decoding performance of an algorithm capable of classifying both simple and compound movements while recording only the EMG activity associated with simple movements. The final goal is to implement user-modulated position and stiffness control of an exoskeleton device, transferring the impedance characteristics of the human to the device and thereby enabling better transparency and safety in applications involving human-machine interaction. We evaluated the efficacy of simple models for identifying stiffness characteristics from the user's EMG signals in a trajectory-tracking task.
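The synergy-based dimensionality reduction mentioned in the second theme rests on factoring a non-negative EMG-envelope matrix into a small set of muscle synergies and their activations. A minimal sketch under standard assumptions (Lee-Seung multiplicative updates for the Frobenius objective; this is a generic NMF, not the specific algorithm used in the thesis) is:

```python
import numpy as np

def extract_synergies(V, k, n_iter=500, seed=0):
    """Factor a non-negative EMG-envelope matrix V (channels x time) into
    V ~ W @ H: the k columns of W are muscle synergies, the rows of H are
    their activation coefficients over time."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 0.1
    H = rng.random((k, V.shape[1])) + 0.1
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates: keep W, H non-negative by construction
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Control complexity then drops from the number of EMG channels to the k synergy activations, which become the low-dimensional inputs to the decoder.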