Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/157596
Title: Modelling human behaviour for shared control wheelchair
Authors: Kabilan, Anbukani
Keywords: Engineering::Mechanical engineering::Robots
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Kabilan, A. (2022). Modelling human behaviour for shared control wheelchair. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/157596
Project: C004
Abstract: Human–robot interaction algorithms that take input from humans can be challenging to test, since recruiting human subjects is expensive, time-consuming and difficult. This makes fine-tuning these algorithms hard, as it requires several rounds of testing. Automating the generation of human input can make testing such algorithms significantly faster. The most common way to automate the generation of human input is imitation learning: using human input data from a few trials, we can train a model that provides human actions for a given situation. Automatic generation of human input is an extremely broad research area, with the ultimate aim of creating a Digital Human Twin that can be used to test algorithms for chatbots, healthcare devices, etc. In this work we focus on the assistive robotic wheelchair application. Using the data collected during human trials at the Rehabilitation Research Institute of Singapore (RRIS), where a human provides joystick input to control the wheelchair under a shared control algorithm, a model can be learned that generates human joystick input given the state and surroundings of the wheelchair. Such a model can then be used to automate the testing of shared control algorithms for the wheelchair. Since different humans control the wheelchair differently depending on their preferences, abilities, etc., there are variations in the collected data, and to enable proper testing, the learned model should be able to reproduce these variations.
The scope of this project was therefore to develop an imitation learning model that can discover and disentangle the latent factors of variation in the demonstrations. We use Information Maximizing Generative Adversarial Imitation Learning (InfoGAIL), since it accounts for the variations present in the data. We process the data collected in rosbags and convert it into a format suitable for training an InfoGAIL model. For training and evaluation of the InfoGAIL model, we integrate the neural network with the shared control system implemented in the Gazebo simulator using the ROS framework. After training the model, we generate the variations and verify that they correspond to the variations present in the data. We also identify the limitations of learning an InfoGAIL model when the trajectories are long and discuss ways to overcome this limitation.
URI: https://hdl.handle.net/10356/157596
Schools: School of Mechanical and Aerospace Engineering
Research Centres: Rehabilitation Research Institute of Singapore (RRIS)
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
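The abstract describes an InfoGAIL policy whose generator is conditioned on a latent code, so that different codes reproduce the different driving styles seen across human demonstrators. A minimal sketch of that conditioning, where the linear policy, dimensions, and all names are illustrative assumptions rather than the project's actual trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 4-D wheelchair state, 2 discrete latent
# style modes (one-hot), 2-D joystick command (linear, angular).
STATE_DIM, CODE_DIM, ACT_DIM = 4, 2, 2

# Stand-in linear policy: command = tanh(W @ [state; code] + b).
W = rng.normal(size=(ACT_DIM, STATE_DIM + CODE_DIM))
b = np.zeros(ACT_DIM)

def sample_code():
    """Sample a one-hot latent code, as InfoGAIL does per trajectory."""
    code = np.zeros(CODE_DIM)
    code[rng.integers(CODE_DIM)] = 1.0
    return code

def policy(state, code):
    """Generate a joystick command conditioned on state and latent code."""
    x = np.concatenate([state, code])
    return np.tanh(W @ x + b)  # tanh keeps commands bounded like a joystick

state = rng.normal(size=STATE_DIM)
cmd = policy(state, sample_code())
# For the same state, different latent codes yield different commands,
# which is how the model reproduces driver-to-driver variation.
```

The mutual-information bonus in InfoGAIL's objective is what forces each latent code to map onto a distinct, recoverable mode of behaviour; this sketch only shows the conditioning mechanism that bonus acts on.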
Appears in Collections: MAE Student Reports (FYP/IA/PA/PI)
Updated on Sep 30, 2023
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.