Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/158151
Title: Learning social norms through simulated crowd interaction
Authors: Dinesdkumar Jayakumaran
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Dinesdkumar Jayakumaran (2022). Learning social norms through simulated crowd interaction. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158151
Project: B3292-211
Abstract: Autonomous mobile robots are increasingly present in human environments, so a safe and efficient navigation system is an essential capability for them. Such a system should follow commonly accepted rules and adhere to social norms. Previous research has established that machine learning methods and simulation platforms can be used to develop navigation systems for autonomous robots, but it has key shortcomings: studies that use simulation software to collect data points rely on unrealistic environment settings or lack realistic crowd movement, and the human actors in those simulated environments do not depict realistic real-life scenarios. This FYP investigates how a mobile robot can navigate a crowded situation in a socially compliant manner, focusing on three areas: 1. creating a realistic indoor crowd simulation in a simulator; 2. evaluating the effectiveness of a current state-of-the-art navigation system; 3. using deep reinforcement learning to train a model that lets a robot navigate in a crowd. In this Final Year Project, we developed a new, realistic hospital ward environment in the Gazebo simulator and incorporated human actors that behave like real humans when they encounter an obstacle. These actors perform tasks modelled on their real-life counterparts, based on a survey of 21 healthcare professionals. We also evaluated the ROS navigation stack in the hospital ward environment and concluded that it cannot deal with obstacles that ignore the robot: the robot tends to get stuck and forgoes its goal. To address this, we implemented a Deep Q-Learning model; after 3000 episodes we observed some improvements.
URI: https://hdl.handle.net/10356/158151
Schools: School of Electrical and Electronic Engineering
Organisations: I2R-ASTAR
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
---|---|---|---
FYP Report Final.pdf (Restricted Access) | | 1.77 MB | Adobe PDF
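
The abstract above mentions training a Deep Q-Learning model so the robot can keep moving toward its goal through the simulated crowd. Below is a minimal, illustrative sketch of such a model, not the report's actual implementation: the state layout (downsampled laser ranges plus goal offset), the discrete action set, the network size, and all hyperparameters are assumptions made for illustration only.

```python
# Illustrative sketch of Deep Q-Learning for discrete-action robot navigation.
# The state/action definitions and hyperparameters below are assumptions,
# not values taken from the report.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 26   # assumed: 24 downsampled laser ranges + distance and angle to goal
N_ACTIONS = 5    # assumed: forward, slight left/right, hard left/right
GAMMA = 0.99
BATCH_SIZE = 64

class QNet(nn.Module):
    """Small MLP mapping a state vector to one Q-value per discrete action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

policy_net, target_net = QNet(), QNet()
target_net.load_state_dict(policy_net.state_dict())  # sync target periodically during training
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-4)
replay = deque(maxlen=100_000)  # stores (state, action, reward, next_state, done) tuples

def select_action(state, epsilon):
    """Epsilon-greedy action selection over the policy network's Q-values."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return policy_net(torch.as_tensor(state, dtype=torch.float32)).argmax().item()

def train_step():
    """One gradient step on the standard DQN temporal-difference target."""
    if len(replay) < BATCH_SIZE:
        return
    batch = random.sample(replay, BATCH_SIZE)
    s, a, r, s2, done = (torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch))
    q = policy_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(1).values * (1 - done)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the report's setting, the transitions would come from the Gazebo hospital-ward simulation via ROS, and the reward would need to encode goal progress and collision avoidance; the abstract does not specify those details, so they are omitted here.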
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.