Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/139736
Title: Collision avoidance for automated guided vehicles using deep reinforcement learning
Authors: Qin, Yifan
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2020
Publisher: Nanyang Technological University
Project: A1237-191
Abstract: It is crucial yet challenging to develop an efficient collision avoidance policy for robots. While centralized collision avoidance methods for multi-robot systems exist and are often more accurate and reliable, decentralized methods, in which each robot generates paths without observing the other robots' states, have the potential to avoid prohibitive computation. As a first step towards a decentralized multi-robot collision avoidance system, this project implements deep reinforcement learning in a collision avoidance simulation for a single robot. The robot scans its surroundings and must find its way through a pre-designed map containing multiple obstacles and branches. Several algorithms are tested and discussed in this project, including Q-Learning, SARSA, Deep Q-Network (DQN), Policy Gradient (PG), Actor-Critic, Deep Deterministic Policy Gradient (DDPG), and Distributed Proximal Policy Optimization (DPPO). Thorough comparisons between DQN, DDPG, and DPPO are presented.
URI: https://hdl.handle.net/10356/139736
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
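The simplest baseline named in the abstract, tabular Q-learning, can be sketched on a toy obstacle-avoidance task. Everything below (the 2×4 grid, the reward values, and the hyperparameters) is an illustrative assumption for this sketch, not taken from the report itself:

```python
import random

# Illustrative tabular Q-learning sketch (not the report's setup): a 2x4 grid
# where the agent must detour around one obstacle cell to reach the goal.
ROWS, COLS = 2, 4
START, GOAL, OBST = (0, 0), (0, 3), (0, 2)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
alpha, gamma, eps = 0.5, 0.9, 0.1           # assumed hyperparameters

def step(s, a):
    r, c = s[0] + a[0], s[1] + a[1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) == OBST:
        return s, -1.0, False        # hitting a wall or the obstacle: penalty, stay put
    if (r, c) == GOAL:
        return (r, c), 1.0, True     # goal reached
    return (r, c), -0.01, False      # small step cost encourages short paths

Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in MOVES}
random.seed(0)
for _ in range(300):                 # training episodes
    s, done = START, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(MOVES) if random.random() < eps else max(MOVES, key=lambda m: Q[(s, m)])
        s2, rew, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[(s, a)] += alpha * (rew + gamma * max(Q[(s2, m)] for m in MOVES) - Q[(s, a)])
        s = s2

# Greedy rollout after training: the learned policy detours around the obstacle.
path, s = [START], START
for _ in range(10):
    a = max(MOVES, key=lambda m: Q[(s, m)])
    s, _, done = step(s, a)
    path.append(s)
    if done:
        break
```

The deep methods the report compares (DQN, DDPG, DPPO) replace the lookup table `Q` with a neural network so the same update idea scales to the continuous laser-scan observations of a real robot.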
Files in This Item:
File | Description | Size | Format | Access
---|---|---|---|---
FYP_Report_QINYifan.pdf (Restricted Access) | Final Year Project Report | 10.42 MB | Adobe PDF | View/Open
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.