Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/158211
Title: Application of reinforcement learning for autonomous combat
Authors: Huang, Andrian
Keywords: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Huang, A. (2022). Application of reinforcement learning for autonomous combat. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158211
Project: A3040-211
Abstract: The RoboMaster University AI Challenge (RMUA) is an annual international robotics competition involving a 2-versus-2 battle between autonomous robots armed with projectile launchers, where the goal is to cooperate with the ally robot and deplete the enemy robots' health by shooting projectiles at them. Due to the nature of the competition, the robots must be able to perform all competition tasks autonomously. With the recent rise in popularity of deep learning, it is compelling to apply deep reinforcement learning to such tasks, which are achievable with traditional methods yet can be difficult to explicitly program or fine-tune. Despite this, RMUA combat robots still predominantly use traditional methods, as these are tried and tested in the competition. Deep reinforcement learning should nevertheless be able to benefit the RMUA combat robot, since it allows agents to learn without explicit programming, even in fairly complex environments. With that in mind, this project explores the use of deep reinforcement learning in the RMUA combat robot by applying it to a specific combat task: autonomous enemy aiming and tracking, a task made difficult by bullet drop, the small hitbox of the enemy robot's armour, and the projectile's finite velocity. The results of this project suggest that deep reinforcement learning can serve as an alternative to classical methods, although it may not outperform them, especially in a simulated environment. Deep reinforcement learning for this task still has considerable room for improvement and could potentially be combined with classical methods.
URI: https://hdl.handle.net/10356/158211
Schools: School of Electrical and Electronic Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
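The abstract frames enemy aiming and tracking as a control problem complicated by bullet drop, the small armour hitbox, and the projectile's finite velocity. Since the full report is restricted, the snippet below is only a minimal, hypothetical sketch of how such a task might be posed as a reinforcement-learning environment; the class name `AimingEnv`, the physics constants, and the reward shaping are illustrative assumptions, not the author's implementation.

```python
import math
import random

# Illustrative constants (assumptions, not taken from the report).
MUZZLE_SPEED = 25.0      # projectile speed in m/s
GRAVITY = 9.81           # gravitational acceleration in m/s^2
ARMOUR_HALF_W = 0.07     # half-width of the armour hitbox in m
ARMOUR_HALF_H = 0.03     # half-height of the armour hitbox in m
BARREL_HEIGHT = 0.35     # muzzle height above the ground in m
ARMOUR_HEIGHT = 0.10     # armour plate centre height in m


class AimingEnv:
    """One-shot aiming task: observe the target, pick (yaw, pitch), get a reward."""

    def reset(self):
        # Target pose: range along x, lateral offset y, and lateral speed.
        self.range_x = random.uniform(2.0, 6.0)
        self.target_y = random.uniform(-1.0, 1.0)
        self.target_vy = random.uniform(-1.5, 1.5)
        return (self.range_x, self.target_y, self.target_vy)

    def step(self, action):
        yaw, pitch = action  # aiming angles in radians
        # Muzzle velocity components; x points toward the target plane.
        vx = MUZZLE_SPEED * math.cos(pitch) * math.cos(yaw)
        vy = MUZZLE_SPEED * math.cos(pitch) * math.sin(yaw)
        vz = MUZZLE_SPEED * math.sin(pitch)
        if vx <= 0.1:  # shot never reaches the target plane
            return None, -10.0, True, {"hit": False}
        t = self.range_x / vx                  # time of flight to the plane x = range_x
        impact_y = vy * t
        impact_z = BARREL_HEIGHT + vz * t - 0.5 * GRAVITY * t * t  # bullet drop
        # The target keeps moving laterally while the projectile is in flight.
        target_y_at_t = self.target_y + self.target_vy * t
        dy = impact_y - target_y_at_t
        dz = impact_z - ARMOUR_HEIGHT
        hit = abs(dy) <= ARMOUR_HALF_W and abs(dz) <= ARMOUR_HALF_H
        # Sparse hit bonus plus dense distance shaping to ease learning.
        reward = 1.0 if hit else -math.hypot(dy, dz)
        return None, reward, True, {"hit": hit}


if __name__ == "__main__":
    # Random-policy rollout; a DRL agent (e.g. PPO or SAC) would replace this loop.
    env = AimingEnv()
    for episode in range(5):
        obs = env.reset()
        action = (random.uniform(-0.5, 0.5), random.uniform(0.0, 0.3))
        _, reward, done, info = env.step(action)
        print(f"episode {episode}: obs={obs}, reward={reward:.3f}, hit={info['hit']}")
```

A classical baseline for the same task would solve the ballistic lead-and-drop equations directly, which is one reason a learned policy may struggle to beat it in simulation, as the abstract notes.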
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Final Report.pdf (Restricted Access) | FYP - Application of Reinforcement Learning for Autonomous Combat | 3.03 MB | Adobe PDF |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.