Please use this identifier to cite or link to this item:
|Title:||Vision-based robot navigation via deep reinforcement learning|
|Authors:||Gan, Zhen Hao|
|Keywords:||Engineering::Electrical and electronic engineering|
|Issue Date:||2021|
|Publisher:||Nanyang Technological University|
|Source:||Gan, Z. H. (2021). Vision-based robot navigation via deep reinforcement learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/148980|
|Abstract:||Efficient and safe robotic navigation in pedestrian-rich environments has long been a crown-jewel problem for researchers. The rise of logistics and transportation applications of robots has underscored the importance of robotic navigation in real-life environments such as hospitals and train stations. When navigating a human environment, a robot needs awareness of its surroundings in order to anticipate human intentions and produce safe navigation maneuvers that follow human social norms as closely as possible. The recent surge of reinforcement learning in robotic navigation has proven effective in handling these more complex and dynamic environments. However, currently proposed solutions either assume perfect information on the detection of human intents, which is nearly impossible, or rely on multiple sensor inputs for detection, which increases the cost of the robotic system. These problems motivate a method that can handle the complexity of dynamic environments in robot navigation while keeping the overall cost of the robot low by using visual sensors such as cameras.
Our proposed solution is divided into two parts: the first is a camera-based visual perception system, namely a pedestrian detector using YOLOv4, which is based on a Convolutional Neural Network (CNN); the second uses the output of the first part together with reinforcement learning (RL) algorithms to achieve automated navigation, steering the AGV to its goal as quickly as possible while avoiding collisions. The proposed method also hypothesizes that the algorithm can work better without a global observation of all dynamic obstacles, focusing only on local observations in the near vicinity of the robot, which mimics human-like navigation to the greatest extent. The proposed approach is verified in several scenarios on a TurtleBot3 Waffle in the Gazebo environment.|
|URI:||https://hdl.handle.net/10356/148980|
|Fulltext Permission:||restricted|
|Fulltext Availability:||With Fulltext|
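The local-observation hypothesis in the abstract (the policy sees only pedestrians near the robot, not a global map of all dynamic obstacles) can be sketched as a simple filtering step between the detector output and the RL policy input. This is a minimal illustrative sketch, not the thesis's actual implementation; the function name, the 3.0 m radius, and the fixed-size padding scheme are all assumptions.

```python
import math

def local_observation(robot_xy, detections, radius=3.0, max_agents=5):
    """Build a fixed-length RL observation from pedestrian detections.

    Keeps only pedestrians within `radius` meters of the robot (the
    "local vicinity"), sorted closest-first, expressed as relative
    (dx, dy) offsets, and zero-padded to `max_agents` entries so the
    policy input has a constant size.
    """
    rx, ry = robot_xy
    nearby = []
    for px, py in detections:
        dx, dy = px - rx, py - ry
        if math.hypot(dx, dy) <= radius:  # discard far-away obstacles
            nearby.append((dx, dy))
    nearby.sort(key=lambda d: math.hypot(*d))  # closest pedestrian first
    nearby = nearby[:max_agents]
    while len(nearby) < max_agents:  # pad to a fixed observation size
        nearby.append((0.0, 0.0))
    return [v for pair in nearby for v in pair]

# A pedestrian 10 m away is outside the 3 m vicinity and is ignored;
# the two nearby ones appear in order of distance.
obs = local_observation((0.0, 0.0), [(1.0, 1.0), (10.0, 0.0), (0.5, -0.5)])
```

A fixed-size, distance-ordered vector like this is a common way to feed a variable number of detected agents into an RL policy network with a fixed input dimension.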
|Appears in Collections:||EEE Student Reports (FYP/IA/PA/PI)|
Updated on May 19, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.