Full metadata record
DC Field | Value | Language
dc.contributor.author | Lim, You Rong | en_US
dc.identifier.citation | Lim, Y. R. (2022). Learning transferable skills in complex 3D scenarios via deep reinforcement learning. Final Year Project (FYP), Nanyang Technological University, Singapore. | -
dc.description.abstract | Deep Reinforcement Learning combines reinforcement learning, the framework that guides an intelligent agent towards its goal, with a deep neural network. The deep neural network acts as a black-box model, performing complex function approximation and minimising output error through back-propagation. This process is expensive in both time and computation, as it can take millions of iterations for the agent to master complex tasks. Recent successes in Transfer Learning with Deep Reinforcement Learning have demonstrated the capability to jump-start the learning process, resulting in better overall performance. Additionally, reusing previously attained knowledge allows the agent to reach a minimum threshold performance in fewer training steps. These successes reduce the training steps required to master a complex task, saving computational resources and time. My contribution is therefore to investigate Deep Reinforcement Learning and its ability to learn and apply transferable skills, via Transfer Learning, within a complex environment involving sparse rewards and domain randomization. The study includes attaining transferable skills with Curriculum Learning and Reward Shaping to tackle the sparse-rewards problem. The popular reinforcement learning algorithms Proximal Policy Optimisation (PPO) and Soft Actor-Critic (SAC) enabled the agent to learn a policy that passes the minimum threshold. Transfer Learning was then performed on the agent, which was trained in new scenarios. These experiments evaluate the policy's capability to generalize and encourage the agent to adapt its existing policy to the new settings, which involved inclined surfaces and changing the agent's shape from ovoid to cubic. The results demonstrate that the agent with transfer learning outperforms the untrained model across various metrics, successfully adapting to the changes based on its observations alone, without external interference. | en_US
dc.publisher | Nanyang Technological University | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Learning transferable skills in complex 3D scenarios via deep reinforcement learning | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Bo An | en_US
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.description.degree | Bachelor of Engineering (Computer Science) | en_US
item.fulltext | With Fulltext | -
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
Restricted Access | | 10.22 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.