Please use this identifier to cite or link to this item:
Title: Combining PSR theory with distributional reinforcement learning
Authors: Zhou, Jingzhe
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2020
Publisher: Nanyang Technological University
Source: Zhou, J. (2020). Combining PSR theory with distributional reinforcement learning. Master's thesis, Nanyang Technological University, Singapore.
Abstract: This work focuses on using Distributional Reinforcement Learning (DRL) in a partially observable environment modelled via Predictive State Representation (PSR) theory. We aim to integrate the benefits of DRL and PSR to obtain a model-based reinforcement learning method capable of providing complete (distributional) performance information about a policy using an observation-only environment model. PSR theory is an advanced technique for modelling a dynamical system in a partially observable environment. Unlike traditional partially observable Markov models, such as the POMDP, which capture the uncertainty of the environment using belief states, the PSR model describes the partially observable environment through the probabilities of executable and observable future events. Distributional Reinforcement Learning, proposed by M. G. Bellemare et al., is a learning paradigm that aims to improve learning by modelling returns as probability distributions instead of scalar expectations.
URI: https://hdl.handle.net/10356/139946
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Theses
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.